
Gen Z jobs aren’t dead yet: $240 billion tech giant IBM says it’s rewriting entry-level jobs—and tripling down on its hiring of young talent.
The job market has been a sore subject for Gen Z. The unemployment rate among young college grads sits at 5.6%, hovering near its highest level in more than a decade outside the pandemic. Meanwhile, prominent executives—from Anthropic’s Dario Amodei to Ford’s Jim Farley—have warned that artificial intelligence will slash corporate entry-level jobs.
But some companies are realizing that cutting young workers out of the pipeline isn’t a sustainable long-term strategy: $240 billion tech giant IBM just revealed it’s ramping up hiring of Gen Z.
“The companies three to five years from now that are going to be the most successful are those companies that doubled down on entry-level hiring in this environment,” Nickle LaMoreaux, IBM’s chief human resources officer, said this week.
“We are tripling our entry-level hiring, and yes, that is for software developers and all these jobs we’re being told AI can do.”
While she acknowledged that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has rewritten those roles across functions to account for AI fluency. For example, software engineers will spend less time on routine coding and more time interacting with customers, while HR staffers will step in when chatbots fall short rather than answering every question themselves.
The shift, LaMoreaux said, builds more durable skills for workers while creating greater long-term value for the company.
With job market conditions likely to stay tight for young candidates in 2026, applicants who show initiative and comfort with AI may be the ones who break through at companies like IBM. According to LinkedIn, AI literacy is now the fastest-growing skill in the U.S.
As AI increases pressure on companies to be leaner and more productive, early-career hiring has often looked like the simplest place to cut. A report from Korn Ferry found that 37% of organizations plan to replace early career roles with AI.
But while that strategy might be helpful with short-term financials, LaMoreaux argued, it could cause havoc in the future.
Reducing junior headcount risks creating an eventual shortage of mid-level managers. Attempting to poach talent from competitors is likely to be costlier, and outside hires tend to take longer to adapt to internal systems and culture.
That’s why, she said, HR leaders need to push back.
“Entry-level hires—it is your responsibility to make the case for that,” she said. “Build the business case now; even though it may not seem so obvious to your leaders, because AI is going to make your job easier three years from now.”
IBM CEO Arvind Krishna has already heard LaMoreaux’s plea and rejected the idea that AI should translate into fewer opportunities for graduates.
“People are talking about either layoffs or freezing hiring, but I actually want to say that we are the opposite,” Krishna told CNN in October. “I expect we are probably going to hire more people out of college over the next 12 months than we have in the past few years, so you’re going to see that.”
Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
Fortune reached out to IBM for further comment.
IBM isn’t alone in betting that younger workers may actually accelerate AI adoption. In fact, according to Melanie Rosenwasser, chief people officer at Dropbox, Gen Z are coming to work equipped with better AI skills than their older peers.
“It’s like they’re biking in the Tour de France and the rest of us still have training wheels,” Rosenwasser told Bloomberg. “Honestly, that’s how much they’re lapping us in proficiency.”
The file-sharing company is set to expand its internship and new graduate programs by 25% to capitalize on the AI fluency of younger workers.
Ravi Kumar S, CEO of IT firm Cognizant, similarly told Fortune last year that he would be creating more entry-level jobs owing to his bullish view of Gen Z.
“So many companies have a pyramid with the bottom where school graduates are. That pyramid is going to be broader and shorter, and the path to expertise is going to be faster,” he said.
“This year, we are hiring more school graduates than ever before. I can take a school graduate and give them the tooling so they can actually punch above their weight. AI is an amplifier of human potential. It’s not a displacement strategy.”
I pay $20 for OpenAI and codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix and get a lot of things done.
I’d happily pay up to $2k/month for it if I were left with no choice, but I don’t think it will ever get that expensive, since you can run models locally with much the same result.
That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.
However I get a lot of time freed up which is amazing because I’m able to play golf 3-4 times a week which would have been impossible without AI.
Productive? Yes. Time saved? Yes. Overall outputs? Similar.
I would like to know what models people are running locally that get the same results as a $20/month ChatGPT plan
Same? Not quite as good as that. But Google’s Gemma 3 27B is broadly comparable to their last Flash model. The latest Qwen3 variants are very good; for my needs at least they are the best open coders. But really, here’s the thing:
There are so many varieties, specialized for different tasks or simply differing in performance.
Maybe we’ll get a one-size-fits-all model at some point, but for now trying out a few can pay off. It also builds a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU with 8GB of VRAM, you can probably run quite a few of them, quantized. It gets a bit esoteric once you dig into quantization formats, but generally speaking you should find out what kinds of integer and float math your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that's what will perform best in both speed and quality, unless you need to run more than one model at a time.
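As a rough illustration of the fits-in-VRAM math (the bytes-per-weight figures and the fixed overhead below are my own ballpark assumptions, not exact values for any particular quantization format):

```python
# Ballpark VRAM need for a quantized model: parameter count times
# bytes per weight, plus headroom for KV cache and activations.
# The bytes-per-weight values are rough assumptions, not exact
# figures for any specific quantization scheme.

BYTES_PER_WEIGHT = {
    "fp16": 2.0,
    "q8": 1.0,   # ~8 bits per weight
    "q4": 0.5,   # ~4 bits per weight
}

def vram_estimate_gb(params_billions, quant, overhead_gb=1.5):
    """Approximate VRAM needed in GB, with fixed overhead for cache."""
    weights_gb = params_billions * BYTES_PER_WEIGHT[quant]
    return weights_gb + overhead_gb

# A 27B model at ~4 bits needs roughly 15 GB: too big for an 8GB card.
print(vram_estimate_gb(27, "q4"))  # → 15.0
# A 7B model at ~4 bits fits comfortably in 8GB.
print(vram_estimate_gb(7, "q4"))   # → 5.0
```

Real footprints vary with context length and runtime, so treat this as a first-pass filter, not a guarantee.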
To give you a reference point on model choice, performance, gpu, etc: one of my systems runs with an nvidia 4080 w/ 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I can get about 60 tokens per second.
I get tolerable performance out of a quantized gpt-oss 20B on an old RTX 3050 I have kicking around (I want to say 20–30 tokens/s, faster when the cache is effective). It's appreciably faster on the 4060. On the 3050 it's not quite ideal for more interactive agentic coding, but it's approaching it, and it fits nicely into "coding in the background while I fiddle with something else" territory.
Just in case anyone hasn't seen this yet:
https://github.com/ggml-org/llama.cpp/discussions/15396 a guide for running gpt-oss on llama-server, with settings for various amounts of GPU memory, from 8GB on up
Yeah, tokens per second can very much influence the work style, and therefore the mindset, a person should bring to usage. You can also build on the results of a faster but less-than-SOTA model in different ways. I can let a coding-tuned 7–12B model "sketch" some things at higher speed, or even a variety of things, review in real time, and then pass off to a slower, more capable model with "this is structurally sound, or at least the right framing; tighten it all up in the following ways…" and let that run in the background.
The run-at-home suggestion was in the context of $2k/mo. At that price, self-hosted hardware pays for itself much faster than it would against $20/mo (or even $200).
Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, though: top-of-the-line GPUs used to be $300, and now you're lucky to get the best GPU for under $2,000. The better models require a lot more VRAM. Macs can run them pretty decently, but now you're spending $5,000-plus when you could have just bought a rig with a 5090 and mediocre desktop RAM, because Sam Altman has ruined the RAM pricing market.
Macs can run larger models due to their unified memory architecture. Try building a 512GB Nvidia VRAM machine; you basically can't.
Fully aware, but who the heck wants to spend nearly 10 grand, and that's with just a 1TB drive (which needs to be able to fit your massive models, mind you)? Fair warning: not ALL the RAM is available to the GPU. On my 24GB MacBook Pro I can only use about 16GB as VRAM. It's still better than my 3080 with only 10GB of VRAM, and I didn't spend more than 2 grand on it.
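A sketch of why that 24GB machine only exposes about 16GB to the GPU: macOS reserves part of unified memory for the system, so the GPU can wire only a fraction of total RAM by default. The two-thirds fraction below is an assumed approximation; the exact cap varies by machine and can reportedly be raised via sysctl.

```python
# Approximate unified memory available to the GPU on a Mac.
# The 2/3 default fraction is an assumption for illustration,
# not an Apple-documented value for every configuration.

def usable_gpu_memory_gb(total_ram_gb, fraction=2/3):
    """Rough GPU-wirable unified memory, in GB."""
    return total_ram_gb * fraction

print(round(usable_gpu_memory_gb(24)))  # → 16, matching the 24GB MacBook above
```

The practical takeaway: when sizing a Mac for local models, budget against this reduced figure, not the headline RAM number.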
I got some decent mileage out of aider and Gemma 27B. The one-shot output was a little worse, but since I don't have to worry about paying per token or hitting plan limits, I felt freer to let it devise a plan, run it in a loop, etc.
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.
And what hardware they needed to run the model, because that's the real pinch in local inference.
There are no models that you can run locally that'll match a frontier LLM
Marx in his wildest nightmare couldn’t have anticipated the degree to which the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…
Bro, nobody wants to hear about the hustle anymore. We're in the second half of this decade now.
> nobody wants to hear about the hustle anymore
Plenty of people are still ambitious and being successful.
The title is a bit misleading. Reading the article, the argument seems to be that entry-level applicants are expected to have the highest AI literacy, so IBM wants them to drive AI adoption.
At least today, I expect this will fail horribly. In my experience, the challenge today isn't AI literacy; it's the domain knowledge required to keep LLMs on the rails.
People literate in AI, but inexperienced in all other facts. What could go wrong!
> People literate in AI, but inexperienced in all other facts. What could go wrong!
It sounds like it's an appeal to MBAs, who are people literate in management but inexperienced in all other areas.
ClawdBot Boardroom Edition
"Galactic scale" and "Fuck Up" are on brand for IBM.
It is IBM after all
Totally fair point.
dotcom implosion redux
It certainly feels that way. I was there. Fortunately had just waltzed into the tech side of things and scurried off back to my professional career for a couple of years.
I watched a lot of stuff burn. It was horrifying. We are nearly there again.
Yeah similar story here. I had to spend a couple of years painting houses before the local market recovered enough that tech jobs were a thing again. Shit was surreal. There was one guy I knew that went from building multi-million dollar server and networking projects for IBM to literally working as unskilled labor on a fencing crew just to make rent.
Problem is there aren't jobs where you can go and hide until the economy recovers this time.
For a time, there were a lot of good deals on nice used office furniture.
Yeah got a nice desk and a trinitron out of it. Covid got me an Aeron :)
I just run sub-agents in parallel. Yesterday I used Codex for the first time. I spun up 350,640 agents and got 10 years of experience in 15 minutes.
New metric: agent-hours spent on a task. Or maybe we measure in tokens. Clearly more tokens burned == more experience, right?
There are actually books which recommend that organizations track employee tokens burned as a proxy for AI adoption. Surprised me a bit.
it's the only KPI available.
Unpatchable xp glitch
You should also mention how many millions of lines of code *you* created.
25 years of LLM experience for a mid-level
"AI is going to wipe out junior developers!"
They actually hire more junior developers
"Uhh .. to adopt AI better they're hiring more junior developers!"
This cope is especially low quality with the context that this is just another purge of older workers at IBM.
Some stats are trickling out at my company. Code-heavy consulting projects show about 18% efficiency gains, but I have problems with that number because no one has been able to tell me how it was calculated. Actual vs. estimated story points is probably how it was done, but that's nonsensical because we all know how subjective estimates, and even actuals, are. It's probably impossible to get a real number that isn't mostly "well, I feel about x% more efficient…"
More interesting, IMO, would be a measure of maintainability. I've heard that code largely written by AI is rarely remembered by the engineer that submitted it, even a week after merging.
You're almost "locked in" to using more AI on top of it, then. It may also make it harder to give non-technical staff estimates of how long a change or new feature would take.
I don’t know how to measure maintainability, but the AI-generated code I’ve seen in my projects is pretty plain-vanilla standard patterns with comments. So, less of a headache than a LOT of human code I’ve seen. Also, one thing the agents are good at, at least in my experience so far, is documenting existing code. This goes a long way in maintenance. It’s not always perfect, but as the saying goes, documentation is like sex: when it’s good it’s great, and when it’s bad it’s better than nothing.
Something I occasionally do is ask it to extensively comment a section of code for me, and to tell me what it thinks the intent of the code was, which takes a lot of cognitive load off of me. It means I'm in the loop without shutting off my brain, as I do have to read the code and understand it, so I find it a sweet spot of LLM use.
By "maintainability" and "rarely remembered by the engineer," I'm assuming the bigger concern (beyond commenting and sane code) is what happens once everyone starts producing tons of code without looking; and since reading (reviewing) code is, to me at least, much harder than writing it, all of this goes unchecked:
* subtle footguns
* hallucinations
* things that were poorly or incompletely expressed in the prompt and ended up implemented incorrectly
* poor performance or security bugs
other things (probably correctable by fine-tuning the prompt and the context):
* lots of redundancy
* comments that are insulting to the intelligence (e.g., "here we instantiate a class")
* ...
Not to mention reduced human understanding of the system, where it might break, and how a given implementation is likely to behave. All of this will come back to bite during maintenance.
I find it funny that we, collectively, are now okay with comments in the code.
I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.
This matters less when agentic tools are doing the maintenance, I suppose, but the backslide in this practice is interesting.
It's never been the consensus. As far back as I can remember, the wisdom was always to comment why the code does what it does if needed, and to avoid saying what the code does.
Saying that function "getUserByName" fetches a user by name is redundant. Saying that a certain method is called because of a quirk in a legacy system is important.
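A toy sketch of that distinction (the function names and the legacy-system detail are invented for illustration, not taken from any real codebase):

```python
# Redundant comment: restates what the signature already says.
def get_user_by_name(users, name):
    # fetch the user by name   <- adds nothing the name doesn't
    return users.get(name)

# Useful comment: records a "why" the code itself cannot express.
def normalize_username(name):
    # A (hypothetical) legacy billing system stored usernames
    # lowercased and truncated to 20 characters; we must match
    # that convention to find old accounts.
    return name.lower()[:20]

print(normalize_username("JohnJacobJingleheimerSchmidt"))
```

The first comment will rot silently the moment the function changes; the second one carries information a future maintainer could not recover from the code alone.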
I regularly implement financial calculations. Not only do I leave comments everywhere, I tend to create a markdown file next to the function, to summarise and explain the context around the calculation. Just plain english, what it's supposed to do, the high level steps, etc.
> I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.
If that was the consensus, it was wrong. There are valuable kinds of comments (whys, warnings, etc) that code can never say.
I'd describe that as a trend, rather than a consensus.
It wasn't an entirely bad idea, because comments carry a high maintenance cost. They usually need to be rewritten when nearby code is edited, and they sometimes need to be rewritten when remote code is edited - a form of coupling which can't be checked by the compiler. It's easy to squander this high cost by writing comments which are more noise than signal.
However, there's plenty of useful information which can only be communicated using prose. "Avoid unnecessary comments" is a very good suggestion, but I think a lot of people over-corrected, distorting the message into "never write comments" or "comments are a code smell".
In the context of this thread, that's because AI fixes the key problem with comments: it maintains them when the code is updated.
Yeah, that was weird; it was like a cult. Some coworkers of mine religiously hunted down every comment in other people's MRs, just assuming "no comments" was a hard rule. Very strange; I had to fight many battles for my sanity. There are many cases where you may want to explain why the code is written the way it is, not just how.
chasd00 did mention that this was for consulting projects, where presumably there's a handover to another team after a period of time. Maintainability was never a high priority for consultants.
But in general I agree with your point.
> engineer that submitted it
This is a poor metric as soon as you reach a scale where you've hired an additional engineer, where 10% annual employee turnover means more than one person, let alone the scale where a layoff is possible.
It's also only a hope once you have dependencies you don't directly manage, like community libraries.
Hint: Make sure the people giving you the efficiency-improvement numbers don't have a vested interest in the numbers looking good. If they do, you cannot trust them.
Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.
better than lines of code at least!