Very strange. I find reasoning has very narrow usefulness for me. It's great for getting a project into context or getting oriented in the conversation, but in long conversations I find reasoning starts to add way too much extraneous stuff and gets distracted from the task at hand.
I think my coding model ranking is something like Claude Code > Claude 4 raw > Gemini > big gap > o4-mini > o3
Great... we've been complaining about useless AI being forced on us in the commercial sector, and now YC alums are going to implement it in veteran healthcare through a backdoor?
I've been pretty neutral about the whole thing to date, but it's starting to seem like YC's AI blindness is actually explained by genuinely evil capitalist intentions.