meet.hn/city/us-Baltimore
Socials:
- x.com/uehreka
- github.com/chrisuehlinger
---
I’m not the GP, but the reason I capitalize words instead of italicizing them is that the italics don’t look italic enough to convey emphasis. I suspect that’s because HN wants to downplay emphasis in general, which, if true, is a bad goal that I oppose.
Also, those guidelines were written in the 2000s in a much different context and haven’t really evolved with the times. They seem out of date today; many of us just don’t consider them that relevant.
> It’s virtually impossible for me to estimate how long it will take to fix a bug, until the job is done.
In my experience there are two types of low-priority bugs (high-priority bugs just have to be fixed immediately no matter how easy or hard they are).
1. The kind where I facepalm and go “yup, I know exactly what that is”, though sometimes it’s too low a priority to fix right now, and it ends up sitting on the backlog forever. This is the kind of bug the author wants to sweep for; they can often be wiped out in big batches by temporarily making bug-hunting the priority every once in a while.
2. The kind where I go “Hmm, that’s weird, that really shouldn’t happen.” These can be easy and turn into a facepalm after an hour of searching, or they can turn out to be brain-broiling heisenbugs that eat up tons of time, and it’s difficult to figure out which in advance. If you’ve wiped out a ton of category 1 bugs, then sifting through this category for easy wins can be a good use of time.
And yeah, sometimes a category 1 bug turns out to be category 2, but that’s pretty unusual. This is definitely an area where the perfect is the enemy of the good, and I find this mental model to be pretty good.
Yeah, like, let’s say that in your compositing workflow you increase exposure then decrease brightness. If your working color space is too small, your highlights will clip when you increase exposure, then all land flat at the same level when you decrease brightness. If your working space is bigger than the gamut people can see, but your last step is to tone map into Display P3, you’ll appreciate the non-clipped highlights, even if your eyes could never comprehend what they looked like in the post-exposure-boost-pre-brightness-drop phase of the pipeline.
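Here’s a minimal numpy sketch of that failure mode, with made-up pixel values and a plain clip standing in for a real tone map into Display P3:

```python
import numpy as np

# Hypothetical linear-light pixel values; 1.0 is display white.
# One highlight is already brighter than the display can show.
highlights = np.array([0.7, 0.9, 1.2])

exposure_gain = 4.0      # crank exposure up (+2 stops)
brightness_scale = 0.25  # then pull brightness back down

# Small working space: out-of-range values are clipped after each step,
# so all three highlights flatten to the same level.
small_space = np.clip(highlights * exposure_gain, 0.0, 1.0) * brightness_scale
print(small_space)  # [0.25 0.25 0.25] -- highlight detail is gone

# Wide/unbounded working space: let values go out of range mid-pipeline,
# and only clip (stand-in for a real tone map) as the final display step.
wide_space = np.clip(highlights * exposure_gain * brightness_scale, 0.0, 1.0)
print(wide_space)   # [0.7 0.9 1. ] -- the relative highlight detail survives
```

The only difference between the two pipelines is where the clamp happens; deferring it to the final tone map is what preserves the highlights.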
I design projections for independent theatre in Baltimore. I use AI in my workflows where it can help me without compromising the quality of what I’m making. I frequently use AI to upscale crappy footage, to interpolate frames in existing video (for artistic purposes, never with documentary archival material), and very occasionally to create wholesale clips in situations where video models can do what I need.
I recently used WAN to generate a looping clip of clouds moving quickly, something that’s difficult to do in CGI and impossible to capture in live action. It worked out because I had no specific demands beyond what I just described, and I wasn’t asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.
This project is an enhanced reader for Y Combinator’s Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
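For context, a reader like this can pull its story data from the official Hacker News API (the public Firebase-hosted one; this is not necessarily what hacker-news-rich uses internally). A minimal Python sketch:

```python
import json
import urllib.request

# Official Hacker News API base URL (Firebase-hosted, read-only).
API = "https://hacker-news.firebaseio.com/v0"

def fetch(path):
    """Fetch a JSON document from the HN API."""
    with urllib.request.urlopen(f"{API}/{path}.json") as resp:
        return json.load(resp)

# Print the top 5 front-page stories with their scores.
for story_id in fetch("topstories")[:5]:
    item = fetch(f"item/{story_id}")
    print(item.get("score"), item.get("title"))
```

Posting and commenting have no official API, which is why clients that support them need the user’s credentials in the first place.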
For suggestions and feature requests, you can write to me here: gabrielepicco.github.io