
Python reigns supreme again, but is AI changing the game for programming languages? Find out how coding is transforming.
Since 2013, we’ve been metaphorically peering over the shoulders of programmers to create our annual interactive rankings of the most popular programming languages. But fundamental shifts in how people are coding may not just make it harder to measure popularity, but could even make the concept itself irrelevant. And then things might get really weird. To see why, let’s start with this year’s rankings and a quick refresher of how we put this thing together.
In the “Spectrum” default ranking, which is weighted with the interests of IEEE members in mind, we see that once again Python has the top spot, with the biggest change in the top five being JavaScript’s drop from third place last year to sixth place this year. As JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in apparent popularity may be due to the effects of AI that we’ll dig into in a moment. But first, to finish up with this year’s scores: in the “Jobs” ranking, which looks exclusively at what skills employers are looking for, Python has also taken first place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume.
Because we can’t literally look over the shoulders of everyone who codes, including kids hacking on Minecraft servers or academic researchers developing new architectures, we rely on proxies to measure popularity. We detail our methodology here, but the upshot is that we merge metrics from multiple sources to create our rankings. The metrics we choose publicly signal interest across a wide range of languages—Google search traffic, questions asked on Stack Exchange, mentions in research papers, activity on the GitHub open source code repository, and so on.
But programmers are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they’ll chat with an LLM like Claude or ChatGPT in a private conversation. And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is significantly decreased. For example, across the total set of languages evaluated in the TPL, the number of questions we saw posted per week on Stack Exchange in 2025 was just 22 percent of what it was in 2024.
With less signal in publicly available metrics, it becomes harder to track popularity across a broad range of languages. This existential problem for our rankings can be tackled by searching for new metrics, or by trying to survey programmers—in all their variety—directly. However, an even more fundamental problem is waiting in the wings.
Whether it’s a seasoned coder using an AI to handle the grunt work, or a neophyte vibe coding a complete web app, AI assistance means that programmers can concern themselves less and less with the particulars of any language. First details of syntax, then flow control and functions, and so on up the levels of how a program is put together—more and more is being left to the AI.
Although code-writing LLMs are still very much a work in progress, as they take over an increasing share of the work, programmers will inevitably shift from being the kind of people willing to fight religious wars over whether source code should be indented with tabs or spaces to people who care less and less about what language is used.
After all, the whole reason different computer languages exist is because given a particular challenge, it’s easier to express a solution in one language versus another. You wouldn’t control a washing machine using the R programming language, or conversely do a statistical analysis on large datasets using C.
But it is technically possible to do both. A human might tear their hair out doing it, but LLMs have about as much hair as they do sentience. As long as there’s enough training data, they’ll generate code for a given prompt in any language you want. In practical terms, this means using one—any one—of today’s most popular general purpose programming languages. In the same way most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.
Sure, there will always be some people who care, just as today there are nerds like me willing to debate the merits of writing for the Z80 versus the 6502 8-bit CPUs. But overall, the popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges.
One obvious long-term consequence of this is that it will become harder for new languages to emerge. Previously, new languages could emerge from individuals or small teams evangelizing their approach to potential contributors and users. Presentations, papers, demos, sample code, and tutorials seeded new developer ecosystems. A single well-written book, like Leo Brodie’s Starting Forth or Brian Kernighan and Dennis Ritchie’s The C Programming Language, could make an enormous difference to a language’s popularity.
But while a few samples and a tutorial can be enough material to jump-start adoption among programmers familiar with the ins and outs of hands-on coding, it’s not enough for today’s AIs. Humans build mental models that can extrapolate from relatively small amounts of data. LLMs rely on statistical probabilities, so the more data they can crunch, the better they are. Consequently, programmers have noted that AIs give noticeably poorer results when trying to code in less-used languages.
There are research efforts to make LLMs more universal coders, but that doesn’t really help new languages get off the ground. Fundamentally, new languages grow because they scratch some itch a programmer has. That itch can be as small as annoyance at having to place a semicolon after every statement, or as large as a philosophical argument about the purpose of computation.
But if an AI is soothing our irritations with today’s languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today’s languages remain frozen in time?
Before speculating further about the future, let’s touch base on where we are today. Modern high-level computer languages are really designed to do two things: create an abstraction layer that makes it easier to process data in a suitable fashion, and stop programmers from shooting themselves in the foot.
The first objective has been around since the days of Fortran and Cobol, aimed at processing scientific and business data, respectively. The second objective emerged later, spurred in no small part by Edsger Dijkstra’s 1968 letter “Go To Statement Considered Harmful.” In it, he argued for eliminating the ability of a programmer to jump to arbitrary points in their code. This restriction was meant to prevent so-called spaghetti code, which makes it hard for a programmer to understand how a computer actually executes a given program. Instead, Dijkstra demanded that programmers bend to structural rules imposed by the language. Dijkstra’s argument ultimately won the day, and most modern languages do indeed minimize or eliminate Go Tos altogether in favor of structures like functions and other programmatic blocks.
These structures don’t exist at the level of the CPU. If you look at the instruction sets for Arm, x86, or RISC-V processors, the flow of a program is controlled by just three types of machine code instructions. These are conditional jumps, unconditional jumps, and jumps with a trace stored (so you can call a subroutine and return to where you started). In other words, it’s Go Tos all the way down. Similarly, strict data types designed to label and protect data from incorrect use dissolve into anonymous bits flowing in and out of memory.
So how much abstraction and anti-foot-shooting structure will a sufficiently advanced coding AI really need? A hint comes from recent research in AI-assisted hardware design, such as Dall-EM, a generative AI developed at Princeton University to create RF and electromagnetic filters. Designing these filters has always been something of a black art, involving the wrangling of complex electromagnetic fields as they swirl around little strips of metal. But Dall-EM can take in the desired inputs and outputs and spit out something that looks like a QR code. The result is something no human would ever design, but it works.
Similarly, could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.
What’s the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills—for example, should a pathfinding program use a classic approach like the A* algorithm, or instead should it try to implement a new method? How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value over coding boot camps.
Will there be a Top Programming Language in 2026? Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there’s always some residual technology that survives. It’s likely that using LLMs to write and assist with code is something that’s going to stick. So we’re going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure. What do you think popularity should mean? What metrics do you think we should consider? Let us know in the comments below.

> AI assistance means that programmers can concern themselves less and less with the particulars of any language.
Sure. Until we need to. Then we face some apparently tiny concern, which is actually deeply intricated with the rest of this whole mess, and we are ready for a ride down the rabbit hole.
> most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.
This can be very misguided from my part but I have the feeling they are two very different cases here. OK, not everyone is an ffmpeg-level champion who will thrive at code-golfing ASM for the last drop of cycle gain.
But there are also probably reasons why third-generation programming languages have lasted without any subsequent proposal completely displacing them. It’s all a tradeoff between expressiveness and precision: what we want to keep in the focus zone, and what we want to delegate to mostly uncontrolled details.
If, to go faster, we need to get rid of our transparent glasses, we will need very sound and solid alternative probes to report what’s going on ahead.
Take into account that this is posted on IEEE.
In my opinion, their target audience is scientists rather than programmers, and a scientist most often thinks of code as a tool to express his ideas (hence, perfect AI-generated code is a kind of grail). The faster he can express them, even if the code is ugly, the better. He doesn’t care about reusing the code later most of the time.
Another hint that scientists rather than programmers are the target audience is that some things would resonate with one category but not the other. For example, they consider Arduino a language. This makes total sense for scientists, as most of those using Arduino don’t necessarily know C++ but are proud to be able to code in Arduino.
That’s a good point.
For a professional programmer, code and what it does is the object of study. Saying the programmer shouldn’t look at the code is very odd.
But reproducibility is famously a matter of some concern to scientists.
Sure, but their tools are complexity-management tools: hypotheses, experiments, empirical evidence, probabilities. To my knowledge, they deal far less with the determinism programmers rely on. It's reproducible if you get similar results with the same probability.
If code is actually viewed as a tool to express ideas, making it easy to read and figure out should be a goal.
I like programming, I like clean code, so it's something I struggled with when I began research.
But producing easy-to-read code when you don't have specifications, because you don't yet know if the idea will work and you are discovering problems with it as you go, doesn't happen naturally.
You refactor all the time, but then something you misunderstood becomes a concern, and you need to refactor everything again, and again, and again... You lose a lot of time, and research is fast-paced.
Scientists who spend too much time cleaning code often miss deadlines and deliverables, which are what they actually need to produce. Nobody cares about their code: once the idea is fully developed, other scientists will just rewrite better software with a full view of the problem. (Some scientists rewrite their whole software once everything is discovered.)
I think a sensible goal for scientists would be easy-to-write code rather than easy-to-read code.
But if you are iterating on code and using an LLM without even looking at the code, there's a reasonable chance that when you prompt "okay, now handle factor y also", you end up with code that handles factor y but also handles pre-existing factor x differently for no good reason. And scientific work is probably more likely than average programming to be numerics stuff where seemingly innocuous changes to how things are computed can have significant impacts due to floats being generally unfriendly.
Totally agree, in my experience we are far from having reliable research code based on prompts.
We are clearly not there yet, but I feel the article is pushing in that direction, maybe to steer research that way.
A long time ago there was an article from the creators of Mathematica or Maple (I don't remember which) that said something similar. The question was: why do we drill matrix operations at school when (modern) tools can perform them all? We should teach matrix algebra at school and let students use the software (a little like using calculators). This would let children learn more abstract thinking and test far more interesting ideas. (If someone has the reference, I'm interested.)
I feel the article follows the same lines, but with current tools.
(of course I'm skipping the fact that Mathematica is deterministic in doing algebra, and LLMs are far from it)
>> most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.
If it were even slightly true, then we wouldn’t be generating language syntax at all; we’d be generating raw machine code for the chip architectures we want to support. Or even just distributing the prompts and letting an AI VM generate the target machine code later.
That may well happen one day, but we’re not even close right now
Also, there’s so much patching in the kernel (for Unix) to work around hardware bugs. And a lot of languages depend on C (with all its footguns) to provide that stable foundation. It’s all unseen work that is very important.
> This can be very misguided from my part but I have the feeling they are two very different cases here
They are indeed very different. If your compiler doesn't emit the right output for your architecture, or the highly optimized library you imported breaks on your hardware, you file a bug and, depending on the third party, get help fixing the issue. Additionally, those types of issues are rare in popular libraries and languages unless you're pushing boundaries, which likely means you are knowledgeable enough to handle those edge cases anyway.
If your AI gives you the wrong answer to a question, or outputs incorrect code, it's entirely on you to figure it out. You can't reach out to OpenAI or Anthropic to help you fix the issue.
The former allows you to pretty safely remain ignorant. The latter does not.
Oh dear. Using AI for something you don't understand well is surely a recipe for disaster and should not be encouraged.
My take is that you should use AI for exactly the same things you would ask a random contractor to do for you, knowing that they won't be there to maintain it later.
On the other hand, one can see it as another layer of abstraction. Most programmers are not aware of how the assembly code generated from their programming language actually plays out, so they rely on the high-level language as an abstraction of machine code.
Now we have an additional layer of abstraction, where we can instruct an LLM in natural language to write the high-level code for us.
natural language -> high level programming language -> assembly
I'm not arguing whether this is good or bad, but I can see the bigger picture here.
Assembly is generally generated deterministically. LLM code is not.
Different compiler versions, target architectures, or optimization levels can generate substantially different assembly from the same high-level program. Determinism is thus very scoped, not absolute.
Also, almost all software has known unknowns in the form of dependencies that get updated constantly; no one can read all of their code. Hence, in real life, if you compile on different systems ("works on my machine") or again after some time has passed (updates to the compiler, OS libs, packages), you will get a different checksum for your build even with unchanged high-level code. So in theory, given perfect conditions, you are right, but in practice it is not the case.
There are established benchmarks for code generation (such as HumanEval, MBPP, and CodeXGLUE). On these, LLMs demonstrate that given the same prompt, the vast majority of completions are consistent and pass unit tests. For many tasks, the same prompt will produce a passing solution over 99% of the time.
I would say yes there is a gap in determinism, but it's not as huge as one might think and it's getting closer as time progresses.
Your comment lacks so much context and nuance to ultimately be nonsense.
You absolutely can, and probably _should_, leverage AI to learn many things you don't understand at all.
Simple example: try picking up a programming language like C with or without LLMs. Doing it with them is going to be much more efficient. C is one of the languages that LLMs have seen the most; they are very, very good at it for learning purposes (and also at bug hunting).
I have never learned as much about computing as in the last 7-8 months of using LLMs to assist me with summarizing, getting information, finding bugs, explaining concepts iteratively (99% of software books are crap: poorly written, quickly outdated, often wrong), scanning git repositories for implementation details, etc.
You people keep committing the same mistake over and over: there are a million uses for LLMs, and instead of defining the context of what you're discussing, you conflate everything with vibe coding, ultimately making your comments nonsense.
I've posted this before, but I think it will be a perennial comment and concern:
Excerpted from Tony Hoare's 1980 Turing Award speech, 'The Emperor's Old Clothes'... "At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted--he always shouted-- "You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution."
My interpretation is that whether shifting from delegation to programmers, or to compilers, or to LLMs, the invariant is that we will always have to understand the consequences of our choices, or suffer the consequences.
Applied to your specific example: yes, LLMs can be good assistants for learning. I would add that triangulation against other sources and against empirical evidence is always necessary before one can trust that learning.
My context is that I have seen some colleagues try to make up for not having expertise with a particular technology by using LLMs and ultimately they have managed to waste their time and other people's time.
If you want to use LLMs for learning, that's altogether a different proposition.
I kinda knew what you meant, but I also feel it is important to provide the nuance and context.
seems like a significant skill/intelligence issue. someone i know made a web security/pentesting company without ANY prior knowledge in programming or security in general.
and his shit actually works by the way, topping leaderboards on hackerone and having a decent amount of clients.
your colleagues might just be clueless or not know how to use llms
Would you recognize a memory corruption bug when the LLM cheerfully reports that everything is perfect?
Would you understand why some code is less performant than it could be if you've never written and learned any C yourself? How would you know if the LLM output is gibberish/wrong?
They're not wrong; it's just not black-and-white. LLMs happen to sometimes generate what you want. Often, for experienced programmers who can recognize good C code, LLMs generate too much garbage for the tokens they cost.
I think some people are also arguing that some programmers ought to still be trained in and experienced with the fundamentals of computing. We shouldn't abandon that skill set completely. Someone will still need to know how the technology works.
Not sure how your comment relates to mine.
The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.
You seem to describe very different use cases.
In any case, just to answer your (unrelated to mine) comment, here[1] you can see a video of one of the most skilled C developers on the planet finding very hard-to-spot bugs in the Redis codebase.
If all your arguments boil down to "lazy people are lazy and misuse LLMs" that's not a criticism of LLMs but of their lack of professionalism.
Humans are responsible for AI slop, not AI. Skilled developers are enhanced by such a great tool that they know how and when to use.
I was commenting on relying completely on the LLM when learning a language like C when you don’t have any prior understanding of C.
How do people using LLMs this way know that the generated code/text doesn’t contain errors or misrepresentations? How do they find out?
>The parent I answered said you shouldn't use LLMs for things you don't understand while I advocate you should use them to help you learn.
Someone else's interpretation is not what the author said. :)
Since the tone is so aggressive, it doesn't feel like it would be easy to build any constructive discussion on this ground.
Acting prudently is not blind rejection, the latter being not wiser than blind acceptance.
Would you mind sharing some of the ways that you leverage LLMs in your learning?
Some of mine:
* Converse with the LLM on deeper concepts
* use the `/explain` hook in VSCode for code snippets I'm struggling with
* Have it write blog-style series on a topic, replete with hyperlinks
I have gotten in some doom loops though when having it try to directly fix my code, often because I'm asking it to do something that is not feasible, and its sycophantic tendencies tend to amplify this. I basically stopped using agentic tools to implement solutions that use tech I'm not already comfortable with.
I've used it for summarization as well, but I often find that a summary of a man page or RFC is insufficient for deeper learning. It's great for getting my feet wet and showing me gaps in my understanding, but I always end up having to read the spec in the end.
Good at bug hunting?
Have you heard about how much AI slop has been submitted as "bugs" to the curl project? They always turn out not to be bugs.
All they've done is widened the plank over the abyss.
> ...deeply intricated with...

I think you invented a new phrase. And it's a good one!
It's hard to find good data sources for this, especially that StackOverflow is in decline[1].
IEEE's methodology[2] is sensible given what's possible, but the data sources are all flawed in some way (and the flaws don't necessarily cancel each other out). The number of search results reported by Google is the most volatile indirect proxy signal. Search results include everything mentioning the query, with no promise of being a fair representation of 2025. People using a language rarely refer to it literally as the "X programming language," and it's a stretch to count all publicity as "top language" publicity.
TIOBE uses this method too, and has the audacity to display it as a popularity with two decimal places, but their historical data shows that the "popularity" of C has dropped by half over two years, and then doubled next year. Meanwhile, C didn't budge at all. This method has a +/- 50% error margin.
[1]: https://redmonk.com/rstephens/2023/12/14/language-rankings-u... [2]: https://spectrum.ieee.org/top-programming-languages-methodol...
By far the most useful and helpful signal is job ads: they literally define the demand side of the programming-language market.
Yes, that does not show us how much code is running out there, and some companies might have huge armies of developers with very low churn, so the COBOL stacks in banks don’t show up. But I can’t think of a more useful and directly measurable way of understanding a language’s real utility.
> the most useful and helpful is job ads
That would certainly be the case, if it were not for the fact that [fake job postings][1] are a thing.
[1]: https://globalnews.ca/news/10636759/fake-job-postings-warnin...
Is there a reason to believe this would skew results?
i.e., are you assuming (insinuating) that jobs for some programming languages are more likely to be fake?
I would assume so. I expect there are a lot of job postings looking for more "sexy" technologies to create the impression that those companies are growing and planning for the future. And conversely, I wouldn't expect any job postings for old, "streets behind" technologies like COBOL to be fake, as they wouldn't help with such signalling.
Yes, to your point: COBOL, which ranks very low here, is still fundamental to the infrastructure of several major industries, with some sources [1] reporting that it is used in:
43% of all banking systems.
95% of all US ATM transactions.
80% of all in-person credit card transactions.
96% of travel bookings.
This may very well dramatically change in the next few years with such an emphasis on enterprise AI tools to rewrite large COBOL repositories. [2]
[1] https://www.pcmag.com/articles/ibms-plan-to-update-cobol-wit...
[2] e.g. Blitzy https://paper.blitzy.com/blitzy_system_2_ai_platform_topping...
I can only speak to the two bigger German banks (i.e., Sparkasse and VR banks), but if you look at their outsourced development providers (Atruvia and Sparkasse Informatik), they're still offering incentives for their apprentices to learn COBOL, especially in the German dual apprenticeship programs, which they can steer more easily than university courses. My wife has been doing COBOL for one of them since 2012, and the demand has never diminished. If anything, it's increased because experienced developers are retiring. They even pull some of these retired developers back for particularly challenging projects.
Sparkasse and VR aren't the two largest German banks. DB is at least double the size of Commerzbank which is again 100mn in assets ahead of DZ. I don't find it all that surprising that these small banks are still trying to keep their legacy systems alive, but it's not the case for the bigger boys. (Source: work for several of them)
You are right if we only talk about assets. I should've clarified I meant in terms of retail customers and branches.
Oh, right, consumer banks. Yes I can imagine they're all extremely legacy bound. They're a very small percentage of banking, though.
Cobol is used in pretty much all enterprise legacy systems.
But "used in" doesn't mean that it's actively being developed by more than a tiny maintenance team.
As the graph we're commenting on is mostly tracking popularity/usage, COBOL is never going to rank higher, because for every COBOL dev there are more than 100 Java devs employed by the same company.
That's a pretty wild claim. What's legacy for you? I'd consider legacy e.g J2EE crap running on web[sphere|logic] as holding most of the points in that league table vs COBOL.
A legacy software to me is whatever the company that employs me says is said legacy software.
Pretty much every business I've worked at to date has had such legacy software, which was inevitably still used in some contexts.
It's not always obvious, because - following with the previous example numbers - only 1-2 Java devs will have to interact with the legacy software again, hence from the perspective of the remaining 98, Cobol doesn't exist anymore.
If they're talking about Cobol, it's usually systems originating before the early 90s that haven't been completely rewritten.
J2EE would be late 90s and 2000s.
In retail banking I'm sure that this could be true. Working in investment banking, I never saw a single COBOL application, or had to have my C++/Java/$MODERNLANGUAGE code interact with one.
Corp bank here; everyone has rumours about COBOL systems, but no one I've ever spoken to has seen or interacted with one, or has any other evidence these really exist anymore either.
Me neither.
But I asked for a bank statement, a few years old, from my savings account, and it took two weeks to print out, in monospace dot matrix.
Or the betting company I was a customer of, which suspends betting every day at 6:30 a.m. for an hour of daily maintenance. Ironically, they would accept bets on football matches being played at that time, but the system was shut down.
I suspect both are run on COBOL.
You haven’t seen or heard of them because they are abstracted away by APIs, circuit breakers, and proxies. Almost ALL banks, credit card companies, travel systems, and other high-throughput transaction systems run on mainframes whose software is written in COBOL.
I think the issue here is that people working in fintech don't seem to come across these systems much, if at all - if you know one specifically, please tell us.
It's still there at the accounting/backend level. Automated Financial Systems' Level 3 and its replacement, Vision, are commercial loan systems.
LVL3 is pure COBOL. It has recently been deprecated, but many banks own the code and are still self-hosting it, along with its IBM green-screen support.
Vision is a Java front end in front of an updated COBOL backend. When your reputation is based on your reliability and long-term code stability, at what point do you risk making the conversion, versus training new developers to work on your existing system?
https://www.linkedin.com/jobs/view/business-analyst-afs-visi...
No, we are not afraid of our own systems. The fabled computer system that everyone is too scared to touch doesn't exist (I work in payment processing). There are levels of controls way outside these systems which provide safety nets (e.g. settlement/reconciliation controls).
If the cobol is still there, it’s not due to risk. If anything, the cobol is a much higher operational risk than replacing it.
Analogously, GDSes like SABRE still ran on mainframes until very recently (c. 2023) [0]. SABRE was written in some combination of assembly and some kind of in-house dialect of PL/I, if I recall.
[0] https://www.theregister.com/2022/04/11/gds_gets_over_histori...
I worked briefly at a company that wrote applications that interacted with bank mainframes. Think end point bank teller systems and in branch customer/account management. They definitely do exist - every major bank has a mainframe written in (usually) cobol.
But it's very abstracted, part of our main product offering WAS abstracting it. On top of our ready to use applications, we offered APIs for higher-level data retrieval and manipulation. Under the hood, that orchestrates mainframe calls.
But even then there could be more levels of abstraction. Not every bank used screen-level mainframe access. Some used off-the-shelf mainframe abstractors like JxChange (yes, there's a market for this).
Fintech would be even more abstracted, I imagine. At that point you can only interact with the mainframe a few levels up, but it's still there. Out of sight.
Yeah when I worked in investment banking it was VBA and Java everywhere, never saw or heard of COBOL.
> Working in investment banking, I never saw a single COBOL application
What was the back office settlement or wire transfer system written in? There is a good chance that some part of them was written in COBOL. And while Bloomberg terminals are a vendor product, for a bloody long time many of their screens had some COBOL. Also, lots of quantitative software at i-banks uses LINPACK or BLAS, which are written in FORTRAN.
Well, I had a very badly specified project to write a library for our back office systems to do SWIFT payments from our C++ applications, via COM. There was no obvious COBOL involved, on either side, but it has to be said that the whole use case for the library was very murky. And it never worked, due to the lack of spec, not the languages.
First hand knowledge: ERGO and MunichRE both have a lot of cobol still doing the core business. You will most likely never run into the system because they just run batch jobs - sometimes configured via a “nice” web UI… you configure your job, submit and the next morning you have your report… that’s why you never actually see COBOL.
1. Not all roles are advertised. I've actually only been interviewed for two of the jobs I've ever had, both at the same place: my current employer, because it's a public institution and so it always advertises and interviews for jobs even if it has an internal candidate who is likely to be a good fit. In fact, the first of those jobs was basically my shape on purpose; another candidate was an equally good fit, and they hired both of us.
Everywhere else people hired me because they knew who I was and what I could do and so in place of an "interview" maybe I grab lunch with some people I know and they explain what they want and I say yeah that sounds like a job I'd take and maybe suggest tweaks or focus changes. No shortlist of candidates, no tech interview, no tailoring a CV to match an advert. Nothing -> Lunch or Drinks -> Job offer.
So that can cause some distortion, especially for the niche languages where there are like six experts and you know them - an advert is futile there.
> measurable way of understanding a language's real utility
It feels like that metric misses "utility" and instead comes from a very American (or maybe "capitalistic" is the better word) mindset.
What about Max/MSP/Jitter? Huge impact in the music scene, probably a very small number of jobs available, so it'd rank fairly low even though it's probably the top media/music language out there today. There are tons of languages that provide "the most utility for their domain" yet barely have any public job ads about them at all.
I think such a metric would be useful for gauging the "employability of someone who knows that language" if anything, but it's probably more pain than gain to link "# of job ads" with "utility".
Thinking about how to measure this properly, why not just the moving average of daily downloads over 30 days from each repository?
… yes CI would be a lot of these downloads, but it’s at least a useful proxy
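The proposal above can be sketched in a few lines: a trailing average over a per-day download series pulled from a registry. This is a minimal illustration, and the function name and sample numbers are made up for the example:

```python
def trailing_average(daily_downloads, window=30):
    """Trailing moving average of daily download counts.

    Prefixes shorter than `window` are averaged over the days available,
    so the series starts immediately instead of after `window` days.
    """
    averages = []
    for i in range(len(daily_downloads)):
        start = max(0, i - window + 1)
        chunk = daily_downloads[start:i + 1]
        averages.append(sum(chunk) / len(chunk))
    return averages

# Hypothetical per-day counts; window=2 keeps the example small.
# trailing_average([10, 20, 30], window=2) -> [10.0, 15.0, 25.0]
```

CI traffic would still dominate the raw counts, as noted, but the smoothing at least damps day-to-day spikes.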
Yeah, except job adverts lag enormously behind what's actually popular. For example, we used Rust quite a lot at my previous company, but we didn't advertise for Rust developers at all.
Also then you're looking at which languages were popular in the past whereas the interesting stat is which languages are being used to start new projects.
Interesting might not be the same as useful.
If I'm trying to figure out which language to learn next, knowing what I can get paid for might be more useful, even if it's not that "interesting".
If lots of projects are starting up in Rust, but I can't get interviews because nobody is advertising, how useful is learning Rust?
Well, we have to define what a language's popularity means. Because Rust is surely more 'hyped' than Java, but Java has at least an order of magnitude more developers, software written, etc.
So in which sense do you use 'popular'?
Ideally we'd like to know both, as they tell us different things.
> we used Rust quite a lot at my previous company but we didn't advertise for Rust developers at all.
> How did you find Rust developers when you needed to hire?

Find developers. Tell them they get to use Rust. You now have Rust developers.
Find a CS program that teaches Rust and hire their graduates.
Existing C++ developers learned Rust.
Plus TIOBE had Perl enter the top 10 suddenly this year but I do not see any new developers. And Ada too! Where are all those Ada programmers?
Keeping 7 Ada vendors in business, one of the few areas where developers actually pay for tooling.
https://www.ghs.com/products/ada_optimizing_compilers.html
https://www.ptc.com/en/products/developer-tools/apexada
https://www.ddci.com/solutions/products/ddci-developer-suite...
http://www.irvine.com/tech.html
https://pkgstats.archlinux.de/packages?compare=ada,gcc,go,ja...
Ada seems pretty popular on Arch
This data is kinda worthless for popularity contests, since they may get pulled in by AUR packages, but it gives solid insight into which languages are foundational.
I wish the same was available for other distros
You can do the same with docker images
curl -s https://hub.docker.com/v2/repositories/library/python/ | jq -r ".pull_count"
8244552364
curl -s https://hub.docker.com/v2/repositories/library/golang/ | jq -r ".pull_count"
2396145586
curl -s https://hub.docker.com/v2/repositories/library/perl/ | jq -r ".pull_count"
248786850
curl -s https://hub.docker.com/v2/repositories/library/rust/ | jq -r ".pull_count"
102699482
"Top Languages" doesn't mean "better", nor does it mean "best".

> Perl enter the top 10 suddenly this year but I do not see any new developers.
Perl is almost as active as Javascript. And more useful than Python.
I write Perl to do all sorts of things every week. It's strange it's not in the top 5 list.
If you look at the programming language list, apart from Python and Java, most are targeted at specific platforms (databases, browsers, embedded systems) or tech (SQL for databases).
The general-purpose programming languages today are still Python, Java, and Perl. Make of this what you will.
Larry Wall at one point said that if you make something very specific to a use case (like awk, sed, PHP, etc.), it sort of naturally starts to fall out of general-purpose use.
It's just that Kotlin, Rust, Go, SQL, Julia, JavaScript, etc. are not general-purpose programming languages.
That was an active debate ... 15 years ago
Yep. And the sources are too often self-reinforcing and self-referential.
Use the "right"/better tool from the toolbox, the tool you know best, and/or the tool that the customer wants and/or makes the most money. This might include Ada[0] or COBOL[1]. Or FORTH[2] or Lua[3]. Popularity isn't a measure of much of anything apart from SEO.
0. https://www2.seas.gwu.edu/~mfeldman/ada-project-summary.html
1. https://theirstack.com/en/technology/cobol
2. https://dl.acm.org/doi/pdf/10.1145/360271.360272
3. https://www.freebsd.org/releases/12.0R/relnotes/#boot-loader
> It's hard to find good data sources for this
I like this:
https://madnight.github.io/githut/#/pull_requests/2024/1
It gives you a count of public repos on GitHub by language used, going back to 2012.
this is much better and aligns with the stackoverflow survey "what are you working on in your free time"
Perhaps the best source would now be the statistics of LLM queries, if they were available.
Edit: I see they raise this point at length themselves in TFA.
I totally expected JavaScript to get the 2nd spot but looks like TypeScript pulled the votes away. I personally consider JavaScript and TypeScript to be close enough for their numbers to be added up.
I agree, I think it makes most sense to add them up to be the true #2.
Then you should probably add kotlin and java together as well. They share the same purpose, use the same VM, usually live in the same project, have native compatibility, are used with the same frameworks, etc.
Especially considering Kotlin is used as a drop in replacement for Java in a lot of projects. Especially when using the type of frameworks often associated with Java (Spring, Quarkus, etc.).
Personally, I think statistics like this are biased towards the median of the past few decades and do not necessarily tell much about the future; other than that things apparently move very slowly and people are mostly conservative and stuck in their ways.
Cobol is still in that list. Right above Elixir, which apparently is a bit of a niche language. Kotlin has only been around for about 15 years, and the 1.0 release was actually only nine years ago. Java was released 30 years ago and it's been dominant in enterprise development for 25 years now. So, no surprise that Java is nearer to the top.
Python is surprising but it's been around for quite long and gained a lot of popularity outside the traditional computer science crowd. I know biochemists, physicists, etc. that all use python. And it's a great language for beginners obviously. It's not so much that people switched to python but that it is driving the growth of the overall programmer community. Most new programmers use python these days and that explains why it is the #1.
Javascript has had a virtual monopoly on basically anything that runs in a browser, which is of course the most popular way to distribute code these days. Especially since plugins were deprecated and things like applets, Flash, etc. disappeared around fifteen years ago. Anything that ran on the web was either written in Javascript or transpiled/compiled to it. WASM is starting to change that, but it's early days.
What the past 25 years tell us is that things definitely change. But very slowly. C++ still outranks Javascript; that's because Javascript is mostly used in browsers and is a lot less popular for other things.
I like Kotlin, so I'm biased. But it's obviously not the most popular thing by a long shot. And popular doesn't mean good. I actually like Python for small, unimportant things, but I reach for Kotlin if I need to do it properly. I used to reach for Java, but Kotlin simply became the better tool for the job, at least for me. I even prefer it over TypeScript, and I do occasionally use it for web frontend development. The transpiler is pretty good. And there's a WASM compiler too, and Compose for WASM just entered beta. Kotlin seems future proof, and it seems to be growing into wider adoption. There are a few million Kotlin programmers around, by JetBrains's count. It's not nothing.
C++ is still very popular where you need raw performance, though not quite as raw as C. Especially since Python is often used as a more user-friendly interface on top of it.
True. Pretty much every Kotlin recruitment message I got was because of past Java experience, so the job market seems to agree with you.
JS is a valid TS, Kotlin is not a valid Java (only at a bytecode level, but then you might as well combine all JVM languages).
But TS is not valid JS and nobody uses TS because they can write JS in a file with a different extension. You also get 0 benefit from running `tsc` on a JS file. You could argue that C is valid C++ so there's no reason to discern them either.
> You could argue that C is valid C++ so there's no reason to discern them either.
Only up to C90, and even modern C++ doesn’t fully implement modern C.
JS is a valid TS.
> Kotlin is not a valid Java
But you can easily have both of them in the same project (e.g. when slowly moving to kotlin) and have them interop.
You can also easily have Objective-C, C, and C++ in the same Swift project and have them interop. That’s a feature of Swift. But adding their numbers together wouldn’t make sense.
It doesn't make it valid Java. You can paste JS verbatim into a TS file and it will work.
Java devs at large are generally not excited about writing Scala.
I like where Scala 3 is headed (finally). Martin seems to realize that the simpler "direct" programming model is better and that will make the language more attractive to regular people who aren't FP purists. Though it does feel like it might be too little too late.
Doesn't really bring benefit. With Java you are more quickly useful in C++ and can write server apps without fuss. Very little benefit in using a different language when Java literally does the same and is used everywhere else.
And Clojure and Scala. So really Clojure is number 2. :-)
Don't forget Jruby and Groovy.
Touche!
You could make a separate graph with "platform" or "language family" so you do js/ts, jvm, .net/clr, C/C++ etc.
That one is perhaps more interesting from an industry/jobs trend perspective whereas the TS vs JS trend is also interesting on its own.
JS is a valid TS, so there’s no reason to discern them.
And #1 on the jobs chart
I agree. These should be combined. Also, what is Arduino doing on this list?
Agreed. There’s a few consolidations I’d prefer, including BEAM-based languages as one.
Then also add up Java & Kotlin and C & C++.
Ooh, then JS&TS are not number two!