Why Ruby on Rails still matters

2025-02-21 · www.contraption.co

An old tool endures in a Next.js world

I found vinyl records from my late grandfather recently. It struck me how this media from the previous millennium played without issues. Vinyl represented a key shift in music distribution - it made printing and sharing sounds accessible, establishing a standard that persists. While audio sharing methods evolved, the original approaches remain functional. In our increasingly complex world, many people return to vinyl because it offers simplicity, stability, and longevity.

Amidst the constant changes of web technologies, it's easy to forget that old websites continue to work just fine, too. A plaintext website from the 1990s loads in modern browsers just as it did then.

Websites gained additional capabilities over time - CSS for styling, JavaScript for interactivity, and websockets for real-time updates. Yet their foundation remains based on pages, forms, and sessions.

Ruby on Rails emerged twenty years ago as a unified approach to building interactive, database-powered web applications. It became the foundation for numerous successful companies - Airbnb, Shopify, GitHub, Instacart, Gusto, Square, and others. Probably a trillion dollars' worth of businesses run on Ruby on Rails today.

Effective tools simplify complex tasks through abstraction. Cars illustrate this - driving once required understanding fuel systems, timing, and clutch mechanics. Now most drivers don't know how many gears their car has.

Ruby on Rails packaged web development best practices into an approachable toolkit: login sessions, CSRF protection, database ORMs. This abstraction lets developers focus on building products rather than technical tedium. Today, most developers don't know the contents of their login cookie, even though it powers their application.
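To make that concrete: a login cookie is typically just a serialized session payload plus a signature that stops the browser from tampering with it. Rails handles the real thing (encryption, key rotation, expiry) for you; the sketch below is a simplified, hypothetical illustration of the signing idea using only Ruby's standard library, not Rails' actual cookie format:

```ruby
require "json"
require "base64"
require "openssl"

SECRET = "app-secret-key" # Rails derives this from the app's credentials

# Sign the session payload so the browser can store it but not tamper with it.
def encode_session(data)
  payload = Base64.strict_encode64(JSON.generate(data))
  digest  = OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload)
  "#{payload}--#{digest}"
end

# Recompute the signature and reject the cookie if it doesn't match.
def decode_session(cookie)
  payload, digest = cookie.split("--", 2)
  return nil unless payload && digest
  expected = OpenSSL::HMAC.hexdigest("SHA256", SECRET, payload)
  return nil unless OpenSSL.secure_compare(expected, digest)
  JSON.parse(Base64.strict_decode64(payload))
end

cookie = encode_session({ "user_id" => 42 })
decode_session(cookie)        # valid signature: returns the session hash
decode_session(cookie + "x")  # tampered: returns nil
```

The point isn't the crypto; it's that Rails made decisions like this once, correctly, so application developers never had to.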

Rails succeeded by staying close to web fundamentals. It uses HTML primitives like pages, input fields, and forms. As a backend-focused framework, it concentrates on data validation, processing, and storage, making form creation straightforward.
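As a rough illustration of that backend focus: in Rails, a form posts to a model whose validation rules live in one place. The plain-Ruby sketch below (a hypothetical class with made-up error messages, no Rails involved) mimics the validate-then-report pattern that Rails' `validates` declarations give you for free:

```ruby
# A toy stand-in for an ActiveRecord model backing a signup form.
class Signup
  attr_reader :email, :errors

  def initialize(params)
    @email  = params["email"].to_s.strip
    @errors = []
  end

  # Rails would express this declaratively, e.g.
  #   validates :email, presence: true
  def valid?
    @errors = []
    @errors << "email can't be blank" if @email.empty?
    @errors << "email is invalid" unless @email.include?("@")
    @errors.empty?
  end
end

form = Signup.new({ "email" => "reader@example.com" })
form.valid?   # true: a controller would now save and redirect
bad = Signup.new({ "email" => "" })
bad.valid?    # false: re-render the form, showing bad.errors
```

In Rails itself, the controller, form helpers, and error display all plug into this same pattern, which is why simple forms stay simple.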

JavaScript gained prominence after Rails' initial success. The past decade of web development advances essentially gave websites the functionality of an iPhone app, while remaining websites.

Next.js is now the most common tool for building a startup. A frontend-focused framework, it enables dynamic loading states, server-side rendering, and complex component composition. Another trillion dollars' worth of companies is being built on Next.js, and these web apps are faster and more polished than what could have been built on Ruby on Rails.

Next.js and its underlying technology, React, drive much of modern web innovation. Nearly every mainstream consumer product you love runs on this stack - Spotify, Netflix, Facebook, and Stripe among them. It lets developers create fast, customized, interactive products by pushing web standards to their limits.

Amid the rapid adoption of Next.js, Rails has remained relevant. New ventures - from independent projects to AI companies - are still choosing it.

The truth is that the new wave of JavaScript web frameworks like Next.js has made it harder, not easier, to build web apps. These tools give developers more capabilities - dynamic data rendering and real-time interactions. But the cost of this additional functionality is less abstraction.

Next.js really competes with native iPhone apps. Previously, startups needed iPhone apps for refined user experiences, and building them was a complex process that often required multiple developers with different specialties. Next.js enabled websites to approach iPhone app quality. Many of today's most polished products, like Linear and ChatGPT, launched as Next.js applications and treated mobile apps as secondary priorities.

Rails evolved over the two decades since its launch, adding JavaScript interactivity, backend job management, loading states, and real-time application tools. It even supports mobile app development. As application patterns evolved, Rails incorporated them as framework features while maintaining its HTML-based foundation.

Most web applications continue to be forms on pages - job boards, vendor systems, and ecommerce stores. Next.js can build these, but requires additional development time compared to Rails. Using cutting-edge frameworks introduces instability through frequent updates, new libraries, and unexpected issues. Next.js applications often rely on a multitude of third-party services - Vercel, Resend, and Temporal, for example - that introduce platform risk.

Developers choose Rails today because, 20 years later, it remains the simplest, most abstracted way to build a web application. Solo developers can create dynamic, real-time web applications independently (as I did with Booklet and Postcard). Enterprise teams use it to build applications with multiple models and access controls, supported by thorough testing. Rails helps small teams work faster while reducing development and maintenance costs.

I have experience with both frameworks. I built Find AI, a venture-funded AI startup, using Rails. As a search engine, it benefited from Rails' ability to handle complex backend operations with simple frontend needs. Today I'm working on Chroma Cloud, designed for exploring and managing large datasets, and Next.js powers its advanced interactions and data loading requirements.

Rails has started to show its age amid the current wave of AI-powered applications. It struggles with streaming LLM output and with parallel processing in Ruby, and it lacks the strong typing that AI coding tools favor. Despite these constraints, it remains effective.

Vinyl changed music by broadening access. Sound quality improved over time, but earlier formats retain value. The Köln Concert maintains its popularity regardless of bit rate. In the technology world, we can enjoy the polish of Linear while appreciating that Craigslist's 90s-era website probably makes more money.

At the end of the day, users care about product utility more than implementation details. Polish fades, but utility persists.



Comments

  • By philip1209 2025-02-21 19:01

    For the hundreds of people reading this article right now - you might be amused to know that you're accessing it from a mac mini on my desk:

    https://www.contraption.co/a-mini-data-center/

    (The CPU load from this is pretty negligible).

    • By asdfman123 2025-02-21 19:50

      What is HackerNews but a system to stress test everyone's hobby websites?

      • By mey 2025-02-22 4:30

        Before this Digg, before that Slashdot.

        What else am I missing?

      • By atum47 2025-02-21 23:22

        Every time I share a project I provide two links, one for my vps and another one for GitHub pages. Usually my projects run on the client, so I have never experienced the hug of death myself.

      • By ash-ali 2025-02-22 2:49

        I absolutely love this comment <3

    • By bluGill 2025-02-21 19:21

      back in my day kid we used to serve far more users from 40mhz CPUs. The only interesting part is that today you can get pipes fast enough to do this in your house, while back then dialup was all we could afford ($1000/month to get into the 1 megabit/second range, ISDN and DSL came soon after and were nice).

      Of course back then we didn't use dynamic anything, a static web page worked.

      Now get off my lawn!

      • By vidarh 2025-02-21 20:24

        My first company website was served off a 120MHz Pentium that also served as the login server where 5 of us ran our X clients (with the X servers on 486's with 16MB RAM)...

        And it wasn't static: because people's connections were mostly so slow, we used a CGI that shelled out to ping to estimate connection speed, and returned either a static image (if you were on a dialup) or a fancy animated gif if you were on anything faster.

        (the ping-test was obviously not reliable - if you were visiting from somewhere with high latency, you'd get the low bandwidth version too, no matter how high your throughput was - but that was rare enough; it worked surprisingly well)

        • By JohnBooty 2025-02-22 17:31

          I love that so much. You just don't see wacky solutions like this any more. I guess it's a good thing, but this career has gotten a hell of a lot less fun and interesting.

      • By helpfulContrib 2025-02-22 12:35

        I used to host 3,000 active daily users from a 33mhz 486 with a 56k modem.

        Thousands and thousands of lines of quality conversation, interaction, humanity.

        To be honest, I kind of miss those days.

        I love to think that the web of the future is just going to be everyones' mac Mini or whatever.

        Big Data™ has always irked me, frankly.

        • By larodi 2025-02-22 16:11

          Everyone moved too fast into the future, and this is perhaps not that good. The whole ASCII and 90s/cyberpunk nostalgia wave is a major cue.

        • By rbanffy 2025-02-22 15:12

          We need something that’s small, cheap, plugs into a power outlet (or a PoE port), and lets anyone serve their personal little node of their distributed social network.

          I started thinking about that around an implementation that could run under Google’s App Engine free tier, but never completed it.

    • By trinix912 2025-02-21 19:23

      I like that you're pointing out application longevity in the linked article. It seems that new SaaS apps appear and disappear daily as cloud hosting isn't cheap (especially for indie hackers). I'd much rather sign up for an app that I knew wouldn't randomly disappear in a couple of months when the cloud bills surpass the profits.

      • By cultofmetatron 2025-02-21 20:17

        I took a startup from zero to $100k MRR over the last 5 years, as of last month. I can tell you that cloud billing is the least of your concerns if you pay even cursory attention to writing good queries and adding indexes in the right places. The real issue is the number of developers who never bother to learn how to structure data in a database for their use case. Properly done, you can easily support thousands of paying users on a single write server.

        • By goosejuice 2025-02-22 5:56

          A bit hand wavy. It obviously depends on the business and what "least of concerns" entails.

          In most cases businesses justify the cost of managed databases for less risk of downtime. A HA postgres server on crunchy can cost over $500/mo for a measly 4vCPU.

          I would agree that it's the least of concerns but for a different reason. Spending all your time optimizing for optimal performance (assuming sensible indexing for what you have) by continuously redesigning your DB structure when you don't even know what your company will be doing next year isn't worth the time for a few hundred a month you might save.

        • By nlitened 2025-02-22 8:40

          > I can tell you that cloud billing is the least of your concerns if you pay even the cursory attention to writing good queries and adding indexes in the right places.

          I read this as "in building your startup, you should be paranoid about team members never making mistakes". I really try to read otherwise, but can't.

        • By DeathArrow 2025-02-22 8:24

          I use CQRS with /dev/null for writes and /dev/random for reads. It's web scale, it's cheap and it's fast.

        • By giantrobot 2025-02-22 5:54

          What? No no, to be fast you need the whole database only in RAM! And SQL is hard so just make it a giant KV store. Schemas are also hard so all values are just amorphous JSON blobs. Might as well store images in the database too. Since it's RAM it'll be so fast!

          /s

    • By aurareturn 2025-02-22 3:11

      That's amazing. Mac Mini is very efficient and is a great little home server. Idles at 3-4w total for the entire machine. Plus, the M4 is a beast of a CPU. It might even be possible to serve a small LLM model like a 3b model on it over the internet.

      • By philip1209 2025-02-22 3:18

        Yeah, the mac minis can have up to 64GB of ram which would support some usable models. However, I accidentally got one with 24gb of ram, and my apps already use 12gbs. So, perhaps I'll get a second box just for LLMs!

        • By aurareturn 2025-02-22 3:24

          A small model like 1B or 3B should be ok with 16GB. I was thinking in the name of savings, you can just use the same machine.

          It's a cool project. I might do it too. I have an M4 Mini sitting on my desk that I got for $550.

    • By bmelton 2025-02-22 2:42

      I've been thinking about that article for the past week so much that I've been looking at $250 Ryzen 7 5700U 16/512/2.5G Ace Magician NUCs to move some of my properties to. They're known to be shipping spyware on their Windows machines, but my thought was that I'd get 3 of them, clear them out with Debian, and set them up as a k8s cluster and have enough horsepower to handle postgres at scale.

      • By ww520 2025-02-22 3:02

        Get NUC, or one of those refurbished Dell or HP mini PCs. They have plenty of CPU power, consume very little idle power, and friendly to Linux.

        • By xp84 2025-02-22 6:16

          I have been wildly happy with my EliteDesk mini pcs. Mine are the “G5” generation which cost like $60-150 on eBay with varying specs, obviously newer generations have better specs but for my “homelab” needs these have been great. I even put a discrete GPU ($60) in my AMD one for a great little Minecraft machine for playing with the kid.

          • By beAbU 2025-02-22 9:00

            I have a G5 EliteDesk small-form-factor PC (about the size of a large cereal box, not a book) that's been running my media server and torrent download services for years now. It has a plucky little 10th-gen i3 or something, and it has been more than enough. It can real-time transcode 4K movies! Dead quiet and sips electricity. Uptime is on average about 8-10 months.

      • By philip1209 2025-02-22 2:43

        Glad it resonated with you!

        If you're considering k8s, take a look at Kamal (also from DHH): https://kamal-deploy.org/

        I think it makes more sense for small clusters.

        • By bmelton 2025-02-22 13:25

          It probably does! Kamal/MRSK has been on the roadmap for awhile. I have deliberately endeavored to keep the existing k8s setup as minimal as possible, and it's still grown to almost unruly. That said, it works well enough across the (surprisingly power efficient) Dell C1100s in the basement, so it'd take a migration to justify, which is of course the last thing you can justify this with.

          • By antonvs 2025-02-22 19:18

            Which k8s distribution are you using? I’ve been using k3s on everything from individual home machines or cloud VMs, to small on-premise customer clusters for customers who don’t already have their own k8s clusters.

            I don’t find it approaching unruliness, quite the opposite really.

            • By bmelton 2025-02-23 16:03

              Vanilla, CNCF. K3S is tempting.

    • By tempest_ 2025-02-21 19:16

      Presumably CF is doing most of the work if the page doesn't actually change all that much?

      • By boogieup 2025-02-22 3:10

        Nobody's actually doing work because serving web pages is cheap.

        • By fmbb 2025-02-22 13:05

          Is it really cheap through ruby?

      • By kevincox 2025-02-22 21:11

        It does look like the main article isn't actually cached by Cloudflare. But most of the assets are. So it is definitely helping but not taking the entire load.

      • By philip1209 2025-02-21 19:51

        Yeah, but there's Plausible Analytics self-hosted on the mac mini that's getting more of the load right now.

    • By TomK32 2025-02-21 19:35

      It's fun to host at home. I run docker on alpine VMs on two proxmox machines. Yeah, separate docker machines for each user or use-case look complicated, but it works fine and I can mount nfs or samba shares as needed. The only thing I have in the cloud is a small hetzner server which I mostly use as an nginx proxy, and iptables is great for that minecraft VM.

      Why did you go for Cloudflare Tunnel instead of wireguard?

      • By nemothekid 2025-02-21 20:23

        Cloudflare Tunnel provides you a publicly routable address for free. With wireguard you would still need a VM somewhere, and if you are hosting your own VM, then whats the point?

        • By boogieup 2025-02-22 3:10

          Not making Cloudflare more of a central point of failure for the internet? We hosted web pages before they MITM'd the entire web.

          • By giantrobot 2025-02-22 6:19

            > We hosted web pages before they MITM'd the entire web.

            We also hosted web pages before the average script kiddie could run tens of Gbps DDoS on sites for the lolz. And before ISPs used CGNAT making direct inbound connections impossible.

          • By miyuru 2025-02-22 16:50

            Public IPv4 address exhausted and NAT happened.

            Even having IPv6 is not a proper solution because of lagging ISPs (adoption is currently ~50%), and even the ISPs who do deploy it often don't deploy it properly (dynamic prefixes or blocked inbound IPv6).

            Add to the mix that a lot of people don't understand IPv6; the internet became more centralized and will keep doing so for the foreseeable future.

          • By october8140 2025-02-22 3:37

            I like how they have amazing great free services and people are upset so many people use it.

            • By adamrezich 2025-02-22 10:36

              That's what we all said about various Google products many years ago, too.

              • By october8140 2025-02-24 9:43

                Then you just switch to something else. What’s the problem? You’re not locked in.

        • By TomK32 2025-02-22 8:13

          It's a small cost of $4.50/month and allows me a lot more control. In regards to wireguard, that one VM I pay for is the central wireguard node for all sorts of devices that I use, allowing me to securely access home services when I'm not at home. There are services you don't want to expose directly via a Cloudflare Tunnel.

        • By dingi 2025-02-21 20:42

          But you are using someone else’s VM. You just don’t pay for it.

    • By firefoxd 2025-02-22 17:34

      I've tried to do so with a $9 pocket PC, but ended up frying it by accidentally short-circuiting it.

      I wrote a visualizer for the traffic that I think people will appreciate [1]. I will post it next month once I add it on github. It was fun to watch an article that went #1 on HN.

      [1]: https://ibb.co/cXT3VNDR

    • By adamtaylor_13 2025-02-22 4:02

      I actually read that blog post too last week (or the week before?) and I’m genuinely considering this.

      Render is crazy expensive for blog sites and hobby apps.

    • By jonwinstanley 2025-02-21 19:45

      Weirdly, that tower in the photo is also on the front page of HN right now

      https://vincentwoo.com/3d/sutro_tower/

    • By your_challenger 2025-02-22 3:01

      Is cloudflare tunnels really this free to support thousands of internet requests?

      I run a windows server at my office where we connect to it using RDP from multiple locations. If I could instead buy the hardware and use cloudflare tunnels to let my team RDP to it then it would save me a lot of money. I could recoup my hardware cost in less than a year. Would this be possible?

      (I wouldn't mind paying for cloudflare tunnels / zero trust. It just should be much smaller than the monthly payment I make to Microsoft)

      • By philip1209 2025-02-22 3:20

        Yup. Cloudflare's typical proxy already handles massive amounts of traffic, so I expect that the marginal cost of this reverse proxy isn't that high.

        I do think Cloudflare has proven itself to be very developer/indie-friendly. One of the only tech unicorns that really doesn't impose its morality on customers.

      • By nemothekid 2025-02-22 3:09

        I used Cloudflare Tunnels for a project that had hundreds of tunnels and did roughly 10GB/day of traffic, entirely for free. The project has since moved to Cloudflare Enterprise, where it pays the opposite of free, but that was completely expected as the project grew.

        I'm pretty sure Tunnels supports RDP and if you don't use a ton of bandwidth (probably under a 1TB/mo), Cloudflare probably won't bother you.

    • By peterhunt 2025-02-21 19:53

      Now do it without Cloudflare :)

      • By mmcnl 2025-02-21 22:36

        I wrote a blog post that generated a lot of traffic on HackerNews last year when it briefly was on #1 here. My blog was (and still is) hosted on a 9-year old Dell Latitude E7250 with Intel Core i5-6300U processor. The server held up fine with ~350 concurrent readers at its peak. It was actually my fiber router that had trouble keeping up. But even though things got a bit slow, it held up fine, without Cloudflare or anything fancy.

      • By philip1209 2025-02-21 19:57

        Perhaps some day.

        My shorter-term goal is to switch my home internet to Starlink, so that all requests bounce off a satellite before landing at my desk.

        • By nofunsir 2025-02-21 20:11

          Except Starlink uses CGNAT, which means you need some external SSHD port forwarding at least.

          • By nemothekid 2025-02-21 20:22

            He could keep using Cloudflare Tunnel, but then he's still using Cloudflare

      • By dingi 2025-02-21 20:39

        Been using a setup following this for quite a while. Nginx reverse proxy on a cheap VPS with a wireguard tunnel to home.

      • By Eikon 2025-02-21 20:05

        Trivial, even for a high traffic website to be served from a fiber connection.

        • By alabastervlog 2025-02-22 0:28

          Computers are stupid good at serving files over http.

          I’ve served (much) greater-than-HN traffic from a machine probably weaker than that mini. A good bit of it dynamic. You just gotta let actual web servers (apache2 in that case) serve real files as much as possible, and use memory cache to keep db load under control.

          I’m not even that good. Sites fall over largely because nobody even tried to make them efficient.

          • By xp84 2025-02-22 6:23

            I’m reminded of a site I was called in to help rescue during the pandemic. It was a site that was getting a lot higher traffic (maybe 2-3x) than they were used to, a Rails app on Heroku. These guys were forced to upgrade to the highest postgres that Heroku offered - which was either $5k or $10k a month, I forget - for not that many concurrent users. Turns out that just hitting a random piece of content page (a GET) triggered so many writes that it was just overwhelming the DB when they got that much traffic. They were smart developers too, just nobody ever told them that a very cacheable GET on a resource shouldn’t have blocking activities other than what’s needed, or trigger any high-priority DB writes.

          • By boogieup 2025-02-22 3:11

            And nobody knows how stuff works at the web server level anymore... The C10K problem was solved a long time ago. Now it's just embarrassing.

        • By philip1209 2025-02-21 22:06

          If only my part of SF had fiber service. #1 city for tech, but I still have to rely on Comcast.

          • By Eikon 2025-02-21 22:12

            Sounds weird to read that from Western Europe where even the most rural places have fiber!

            I understand that the USA is big, but no fiber in SF?

            • By xp84 2025-02-22 6:27

              SF is mostly served by AT&T, who abandoned any pretense of upgrading their decrepit copper 20 years ago, and Comcast, whose motto is “whatcha gonna do, go get DSL?”

              AT&T has put fiber out in little patches, but only in deals with a guaranteed immediate ROI, so it would mean brand new buildings, where they know everyone will sign up, or deals like my old apartment, where they got their service included in the HOA fee, so 100% adoption rate guaranteed! AT&T loves not competing for business.

              Sure, others have been able to painstakingly roll out fiber in some places, but it costs millions of dollars to string fiber on each street and to get it to buildings.

              • By unclebucknasty 2025-02-22 8:07

                Lived in an older neighborhood in Georgia a couple years back. A new neighborhood across the street had it (AT&T), but we didn't.

                Caught an AT&T tech in the field one day, and he claimed that if 8 (or 10—memory's a little fuzzy) people in the neighborhood requested it, they'd bring it in.

                I never did test it, but thought it interesting that they'd do it for that low a number. Of course, it may have been because it was already in the area.

                Still, may be worth the ask for those who don't already have it.

            • By ekianjo 2025-02-22 6:21

              > where even the most rural places have fiber!

              No need for the hyperbole. I know for a fact that you don't get fiber in the remote countryside of France

            • By fragmede 2025-02-22 0:25

              https://bestneighborhood.org/fiber-tv-and-internet-san-franc... has a detailed map, by provider, if you wanna dig into the gory details, but there is fiber, just not everywhere.

            • By deaddodo 2025-02-22 15:22

              In the US, it’s not about money or demand. The more entrenched cities (especially in California, for some historic reasons/legislation) tend to have a much more difficult time getting fiber installed. It all comes down to bureaucracy and NIMBYism.

              • By bcoates 2025-02-23 2:24

                It's just SF, there's fiber-to-the-pole or better in most of the LA area, even if the only last-foot service is DSL or cable

                • By deaddodo 2025-02-24 4:57

                  Sure, but it took much longer for it to roll out in LA than it should have, and even then (as you pointed out) the furthest they could get was the pole in most cases. FTTH is mostly reserved for the more suburban areas (the Valleys) and the independent cities.

            • By jnathsf 2025-02-22 0:23

              We have fiber in half of SF via Sonic, where there are overhead wires. The other half of SF has its utilities underground, making the economics more difficult.

            • By philip1209 2025-02-21 23:55

              Not where I am

            • By boogieup 2025-02-22 3:12

              [flagged]

    • By AlchemistCamp 2025-02-22 4:06

      A mac mini is pretty beefy for hosting a blog!

      I’ve had a number of database-driven sites hosted on $5/month VPS that have been on the front page here with minimal cpu or memory load.

      • By philip1209 2025-02-22 5:59

        It's hosting a variety of apps - blog (Ghost), plausible analytics, metabase, and soon 3 Rails apps. It's unfortunately running Postgres, MySQL, and Clickhouse.

    • By mattgreenrocks 2025-02-22 13:20

      Love all the projects you have going. Do you use a template for the landing pages? Or DIY? They look great!

    • By psnehanshu 2025-02-22 7:26

      I see you're serving a GTS certificate. Does GCP allow you to download TLS certificates? I honestly didn't know. I thought just like AWS, you get them only when using their services like load balancers, app runners etc.

    • By rapind 2025-02-21 19:16

      Pretty cool. Wouldn't work for me as my ISP is horrendously unreliable (Rogers in Canada, I swear they bounce their network nightly), but I might consider colocating a mac mini at a datacenter.

    • By raitom 2025-02-22 16:23

      What kind of Mac mini do you use (cpu and ram)? I’m really interested in making the same thing but I’m not sure if the base M4 mini is enough with just 16gb of ram.

      • By philip1209 2025-02-22 17:31

        I have the m4 pro with 24gb of ram. I wish I had increased the ram to 64gb - I'm at 50% utilization already.

        The CPU is more powerful than I need. I didn't pass 8% utilization in the last 24 hours as HN slammed the box.

      • By jon-wood 2025-02-22 17:27

        Depends what you're doing. If it's literally just serving up some static web pages though that is hilariously over specified, you're going to be constrained by your internet connection long before that Mac Mini starts breaking a sweat.

    • By boogieup 2025-02-22 3:09

      That makes sense, because serving a web page to a few hundred people is not a computationally expensive problem. :3

      • By philip1209 2025-02-22 3:21

        I self-host analytics on the box (Plausible), which is using more resources than the website. There are a few apps on there, too.

        • By ekianjo 2025-02-22 6:22

          Plausible is hardly compute intensive

    • By dakiol 2025-02-22 10:10

      How much does it cost to keep the mac mini on for a month? I've been thinking of doing the same.

      • By philip1209 2025-02-22 20:32

        Not sure - I could hook up an energy monitor and run some math. But, I don't think the marginal internet cost or electricity cost are really much.

    • By cl0ckt0wer 2025-02-22 21:28

      Do mac minis have ECC ram?

    • By _vaporwave_ 2025-02-21 19:17

      Very cool! Do you have a contingency in place for things like power outages?

      • By philip1209 2025-02-21 19:52

        Not really . . . Cloudflare Always Online, mostly.

        I had 2m35s of downtime due to power outages this week.

        • By firecall 2025-02-22 15:22

          A MacBook Air solves this problem very nicely!

          Not only does it have a built-in UPS, but it also comes with a screen, keyboard, and trackpad for when you need to do admin tasks physically at the console!

          • By philip1209 2025-02-22 20:34

            Yeah, I had considered this! But, then I'd need a UPS on my modem and wifi, and at that point it seemed overkill.

            • By firecall 2025-02-22 22:34

              Yes, totally.

              Although you could use a backup USB cellular modem, plugged directly into the Mac!

              At some point I imagine I'll inherit or acquire a cheap older M1 MacBook Air, which will be perfect!

              I also have whole home battery backup and solar, so I technically have a UPS for everything!

              Where I live the power tends to go out from time to time!

              I monkeyed around with cheap ex-lease Dell Micro PCs with Intel 8th-gen to 11th-gen CPUs. But they're not as performant as I'd like, and once you've experienced modern CPUs like Apple's M series, you don't really want to go back!

    • By renegade-otter 2025-02-22 10:17

      Hosting from home is fun, I guess, but it actually was a money-saving exercise before the cloud. I've done it.

      Now, however, what is the point? To learn server config? I am running my blog with GitHub pages. A couple of posts made it to the top of HN, and I never had to worry.

      Always bewilders me when some sites here go down under load. I mean, where are they hosting it that a static page in 2020s has performance issues?

    • By fsndz 2025-02-22 7:00

      the article is AI-generated isn't it ?

      • By philip1209 2025-02-22 7:24

        Nope

        • By fsndz 2025-02-22 8:04

          Lol, just by reading it I knew it was. Then I used an AI detection tool and it said it was 100% sure it's AI-generated. Do you know how hard it is to get 100% confidence that something is AI-generated?

          • By berdario 2025-02-22 11:58 · 1 reply

            Most "AI detection tools" are just the equivalent of a Magic 8 ball.

            In fact, most of them are just implemented by feeding an LLM the text, and asking "is it AI generated?". You cannot trust that answer any more than any other LLM hallucination. LLMs don't have a magic ability to recognise their own output.

            Even if your "detection tool" was using exactly the same model, at the same exact version... unless the generation was done with 0 temperature, you just wouldn't be able to confirm that the tool would actually generate the same text that you suspect of being LLM-generated. And even then, you'd need to know exactly the input tokens (including the prompt) used.

            Currently, the only solution is watermarking, like what Deepmind created:

            https://deepmind.google/discover/blog/watermarking-ai-genera...

            but even that requires cooperation from all the LLM vendors. There's always going to be one (maybe self-hosted) LLM out there which won't play ball.

            If you're going to accuse someone of pushing LLM-generated content, don't hide behind "computer said so", not without clearly qualifying what kind of detection technique and which "detection tool" you used.

            • By fsndz 2025-02-22 14:48 · 2 replies

              I am starting to believe this is a lie spread by AI companies, because if AI slop starts to be detected at scale, it kills their primary use case. True, AI detection tools are not perfect; like any classification algo, they don't have 100% accuracy. But that does not mean they are useless. They give useful probabilities. If AI detectors are so wrong, how do you explain that when I pass AI-generated text to GPTZero it catches it every time, and when I pass human-written content it recognises it as such almost 99% of the time?

              • By titmouse 2025-02-22 16:13 · 1 reply

                It's the false positives that make it useless. Even if it's generally very good at detecting AI, the fact that it can and does throw false positives (and pretty frequently) means that nothing it says means anything.

                • By fsndz 2025-02-22 21:24 · 1 reply

                  lol with that kind of reasoning, nobody should use statistics or any kind of machine learning model...

                  • By foldr 2025-02-23 10:43

                    That’s actually not a bad rule of thumb!

              • By berdario 2025-02-23 16:25

                > They give useful probabilities

                Yes, compared to an all-or-nothing approach, it's better to be upfront about the uncertainty, especially if the tool surfaces the probability by-sentence.

                But how are those probabilities computed? You mention gptzero, but https://gptzero.me/technology doesn't clarify at all how it works. They link papers using GPTZero (i.e. from other researchers), e.g. https://arxiv.org/pdf/2310.13606

                And these very same papers highlight how everything is still unknown

                > Despite their wide use and support of non-English languages, the extent of their zero-shot multilingual and cross-lingual proficiency in detecting MGT remains unknown. The training methodologies, weight parameters, and the specific data used for these detectors remain undisclosed.

                GPTZero seems to be better than some of the alternatives, but other discussions here on HN when it was launched highlight all of the false positives and false negatives it yielded:

                https://news.ycombinator.com/item?id=34556681

                https://news.ycombinator.com/item?id=34859348

                But all of that is pretty old; there have been a couple of posts in the last year about it, but both are about the business rather than the quality of the tool itself.

                https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal...

                So, to check whether it's any better now, I tried it myself: I got it to yield a false negative (a 50% human / 50% AI rating for a text which was wholly AI-generated), and I haven't got it to yield a false positive.

                But all of this is just anecdotal evidence, I haven't run a rigorous study.

                For sure, if some competent people believe that the tool won't generate false positives, I'll be mindful of it and (in the rare cases in which I write long posts, blog articles, etc.) I'll check that it doesn't erroneously flag what I write.

                It's bittersweet: if a tool that can be relied upon really exists, that would be good news. But if that tool is closed source (just like ChatGPT, Gemini, etc.), that doesn't inspire confidence. What if the closed-source detection tool suddenly starts erroneously flagging a subset of human texts which it didn't before?

                At least, even with the closed source LLMs, we have a bunch of papers that explain their mechanism. I hope that GPTZero will be more forthcoming about the way it works.

          • By peteforde 2025-02-27 18:00

            I do worry that the thought process involving feeding everything you consume into an AI powered "AI detector" is a slippery slope to a dystopian hell of one's own making.

            You don't have to extrapolate too many iterations before you start arguing that you can't trust you're living in base reality.

            My take on both issues is that even if you're right, the choice is whether to spend your life in a state of cynical paranoia, or not.

            If you figure out the consciousness equivalent of Ctrl-Shift-Esc, please report back.

          • By tim333 2025-02-22 14:03 · 1 reply

            You can kind of tell it's not AI when it gets beyond the generic stuff and on to say

            >Today I'm working on Chroma Cloud, designed for exploring and managing large datasets, and Next.js powers its advanced interactions and data loading requirements.

            which is unlikely to have been written by an LLM.

            • By fsndz 2025-02-22 14:44 · 1 reply

              you can inject personal stuff to make it feel original, but huge chunks are still AI-generated. Just take the first 4-5 paragraphs and paste them into GPTZero

              • By tim333 2025-02-22 15:23 · 1 reply

                Well, on the one hand you have GPTZero saying it's in the style of AI, which I don't count as reliable, and on the other you have the author saying it's not, which I weight higher.

                And it mostly makes too much sense apart from "most drivers don't know how many gears their car has" which has me thinking huh? It's usually written on the shifter.

    • By k4runa 2025-02-21 19:05

      Nice

  • By graypegg 2025-02-21 18:16 · 5 replies

    I really like web apps that are just CRUD forms. It obviously doesn't work for everything, but the "list of X -> form -> updated list of X" user experience works really well for a lot of problem domains, especially ones that interact with the real world. It lets you name your concepts, and gives everything a really sensible place to change it. "Do I have an appointment, let me check the list of appointments".

    Contrast that with more "app-y" patterns, which might have some unifying calendar, or mix things into a dashboard. Those patterns are also useful!! And of course, all buildable in Rails as well. But there is something nice about the simplicity of CRUD apps when I end up coming across one.

    So even though you can build in any style with whatever technology you want:

    Rails feels like it _prefers_ you build "1 model = 1 concept = 1 REST entity"

    Next.js (+ many other FE libraries in this react-meta-library group) feels like it _prefers_ you build "1 task/view = mixed concepts to accomplish a task = 1 specific screen"

    • By zdragnar 2025-02-21 19:00 · 14 replies

      The problem with 1 model = 1 rest entity (in my experience) is that designers and users of the applications I have been building for years never want just one model on the screen.

      Inevitably, once one update is done, they'll say "oh and we just need to add this one thing here" and that cycle repeats constantly.

      If you have a single page front end setup, and a "RESTful" backend, you end up making a dozen or more API calls just to show everything, even if it STARTED out as narrowly focused on one thing.

      I've fought the urge to use graphql for years, but I'm starting to think that it might be worth it just to force a separation between the "view" of the API and the entities that back it. The tight coupling between a single controller, model and view ends up pushing the natural complexity to the wrong layer (the frontend) instead of hiding the complexity where it belongs (behind the API).

      • By LargeWu 2025-02-21 19:14 · 4 replies

        Why the assumption that an API endpoint should be a 1:1 mapping to a database table? There is no reason we need to force that constraint. It's perfectly legitimate to consider your resource to encompass the business logic for that use case. For example, updating a user profile can involve a single API call that updates multiple data objects - Profile, Address, Email, Phone. The UI should be concerned with "Update Profile" and let the API controller orchestrate all the underlying data relationships and updates.
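        As a plain-Ruby sketch of that "one endpoint orchestrates several records" shape (the `UpdateProfile` class and its fields are hypothetical, not from the comment, and real code would wrap the writes in a database transaction):

```ruby
# One coarse-grained "Update Profile" operation that the API controller
# exposes as a single endpoint, fanning out to several underlying records.
class UpdateProfile
  Result = Struct.new(:profile, :address, :email, keyword_init: true)

  # params is the single payload the UI sends; the orchestration of the
  # underlying data objects stays behind the API boundary.
  def self.call(params)
    # In a real app each of these would be a model save inside a transaction;
    # plain hashes keep the sketch self-contained.
    Result.new(
      profile: { name: params.fetch(:name) },
      address: { city: params.fetch(:city) },
      email:   { address: params.fetch(:email) }
    )
  end
end

result = UpdateProfile.call(name: "Ada", city: "London", email: "ada@example.com")
```

        The UI only ever knows about "Update Profile"; how many rows that touches is a server-side detail.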

        • By jaredklewis 2025-02-21 19:59 · 3 replies

          You seem to be in agreement with the parent, who argues 1 model (aka database row) = 1 rest entity (aka /widgets/123) is a bad paradigm.

          Different widget related front-end views will need different fields and relations (like widget prices, widget categories, user widget history and so on).

          There are lots of different solutions:

          - Over fetching. /widgets/123 returns not only all the fields for a widget, but more or less every possible relation. So a single API call can support any view, but with the downside that the payload contains far more data than is used by any given view. This not only increases bandwidth but usually also load on the database.

          - Lots of API calls. API endpoints are tightly scoped and the front-end picks whichever endpoints are needed for a given view. One view calls /widgets/123 , /widgets/123/prices and /widgets/123/full-description. Another calls /widgets/123 and /widgets/123/categories. And so on. Every view only gets the data it needs, so no over fetching, but now we're making far more HTTP requests and more database queries.

          - Tack a little "query language" onto your RESTful endpoints. Now endpoints can do something like: /widgets/123?include=categories,prices,full-description . Everyone gets what they want, but a lot of complexity is added to support this on the backend. Trying to automate this on the backend by having code that parses the parameters and automatically generates queries with the needed fields and joins is a minefield of security and performance issues.

          - Ditch REST and go with something like GraphQL. This more or less has the same tradeoffs as the option above on the backend, with some additional tradeoffs from switching out the REST paradigm for the GraphQL one.

          - Ditch REST and go RPC. Now, endpoints don't correspond to "Resources" (the R in rest), they are just functions that take arguments. So you do stuff like `/get-widget-with-categories-and-prices?id=123`, `/get-widget?id=123&include=categories,prices`, `/fetch?model=widget&id=123&include=categories,prices` or whatever. Ultimate flexibility, but you lose the well understood conventions and organization of a RESTful API.

          After many years of doing this many times over, I pretty much dislike all the options.
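          For illustration, the "little query language" option above might look like this in plain Ruby (the allowlist, fields, and `widget_payload` helper are invented for the sketch; a Rails controller would read `params[:include]`):

```ruby
# Parse an ?include=categories,prices parameter against a hard allowlist,
# so clients can opt into relations without arbitrary query power.
ALLOWED_INCLUDES = %w[categories prices full-description].freeze

def widget_payload(id, include_param)
  # Array intersection silently drops anything not whitelisted.
  requested = include_param.to_s.split(",") & ALLOWED_INCLUDES

  payload = { id: id, name: "Widget #{id}" }  # base fields, always present
  payload[:categories] = ["gadgets"]        if requested.include?("categories")
  payload[:prices]     = [{ amount: 9.99 }] if requested.include?("prices")
  payload
end

widget_payload(123, "categories,prices")
```

          The allowlist is what keeps this from sliding into the security and performance minefield the comment warns about: unknown keys never reach the query layer.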

          • By procaryote 2025-02-22 9:14 · 1 reply

            Lots of API calls scales pretty well, as long as those APIs aren't all hitting the same database. You can do them in parallel. If you really need to you can build a view specific service on the backend to do them in parallel but with shorter round-trips and perhaps shared caches, and then deliver a more curated response to the frontend.

            If you just have one single monolithic database, anything clever you do on the other levels just lets you survive until the single monolithic database becomes the bottleneck, where unexpected load on one endpoint breaks several others.

            • By jupp0r 2025-02-22 20:13 · 1 reply

              "you can do them in parallel" - not in Rails.

              • By procaryote 2025-02-23 8:43

                Well, you could do them in parallel from the client to independent endpoints.

                But yeah, rails might be a bad match

          • By jbverschoor 2025-02-22 2:15 · 1 reply

            Webapps are going back to multiple requests because of http2 / quic multiplexing.

            • By jupp0r 2025-02-22 20:17 · 1 reply

              This solves the problem of slow transport between your frontend and your backend, but it will still incur a lot of unnecessary load on the database as well as compute on your backend (which isn't normally a problem unless you're using something really slow like Rails).

              • By jbverschoor 2025-02-24 11:29

                Why? Queries would still have to be done. Yes, a few things would be duplicated (authentication), but on the other hand, queries can be cached at a more fine grained level. It's easier to cache 3 separate queries of which one can be re-used later, than to cache one monster query. s/query/response

          • By cetu86 2025-02-21 21:14 · 1 reply

            So what do you do instead?

            • By jaredklewis 2025-02-21 22:36

              I do one or some combination of the options above. I've also tried some more exotic variations of things on the list like Hasura or following jsonapi.org style specs. I haven't found "the one true way" to structure APIs.

              When a project is new and small, whatever approach I take feels amazing and destined to work well forever. On big legacy projects or whenever a new project gets big and popular, whatever approach I took starts to feel like a horrible mess.

        • By wahnfrieden 2025-02-21 19:41

          Rails began that trend by auto-generating "REST" routes for 1:1 table mapping to API resource. By making that so easy, they tricked people into idealizing it

          Rails' initial rise in popularity coincided with the rise of REST so these patterns spread widely and outlasted Rails' mindshare

        • By 0x457 2025-02-21 19:51

          No, it's that an API entity can be composed of sub-entities which may or may not be exposed directly via the API.

          That's what https://guides.rubyonrails.org/association_basics.html is for.

          However, Rails scaffolding is heavily geared towards that 1:1 mapping - you can make all CRUD endpoints, model and migration with a single command.

        • By rtpg 2025-02-22 6:57

          If you lean into more 1:1 mappings (not that a model can't hold FKs to submodels), then everything gets stupid easy. Not that what you're saying is hard... just if you lean into 1:1 it's _very easy_. At least for Django that's the vibe.

      • By graypegg 2025-02-21 19:29

        I have actually had a different experience. I feel like I've run into "we can't just see/edit the thing" more often than "we want another thing here" with users. Naming a report is the kiss of death. "Business Report" ends up having half the data you need, rather than just a filterable list of "transactions" for example.

        However, I'm biased. A lot of my jobs have been writing "backoffice" apps, so there's usually models with a really clear identity associated to them, and usually connected to a real piece of paper like a shipment form (logistics), a financial aid application (edtech), or a kitchen ticket (restaurant POS).

        Those sorts of applications I find break down with too many "Your school at a glance" sort of pages. Users just want "all the applications so I can filter to just the ones who aren't submitted yet and pester those students".

        And like many sibling comments mention, Rails has some good answers for combining rest entities onto the same view in a way that still makes them distinct.

      • By dmix 2025-02-21 19:23 · 1 reply

        Turbo frames solves a lot of this. https://turbo.hotwired.dev/

        Multiple models managed on a single page, each with their own controllers and isolated views.

        • By pdimitar 2025-02-22 9:08 · 2 replies

          Or you can do it right and use Elixir's LiveView, from which everyone is getting inspired these days.

          • By xutopia 2025-02-22 14:53

            LiveView is the brainchild of Chris McCord. He did the prototype on Rails before getting enamoured by Elixir and building Phoenix to popularize the paradigm.

            LiveView is amazing and so is Phoenix but Rails has better support for building mobile apps using Hotwire Native.

          • By dmix 2025-02-23 3:45

            Not everyone can make a dramatic switch of languages and frameworks. Turbo is excellent at what it does. Pure joy to use, replacing much of our Vue frontend.

      • By stickfigure 2025-02-22 2:45 · 1 reply

        > you end up making a dozen or more API calls just to show everything

        This is fine!

        > I've fought the urge to use graphql for years

        Keep fighting the urge. Or give into it and learn the hard way? Either way you'll end up in the same place.

        The UI can make multiple calls to the backend. It's fine.

        Or you can make the REST calls return some relations. Also fine.

        What you can't do is let the client make arbitrary queries into your database. Because somebody will eventually come along and abuse those APIs. And then you're stuck whitelisting very specific queries... which look exactly like REST.
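        That whitelist usually takes the shape of persisted queries: clients send a known operation id, never raw query text. A minimal sketch (the operation name, query text, and `resolve_operation` helper are invented for illustration):

```ruby
# Persisted-query whitelist: only pre-registered operations are executable,
# so a hostile client can't craft arbitrary queries.
PERSISTED_QUERIES = {
  "widgetWithPrices" => "query ($id: ID!) { widget(id: $id) { name prices { amount } } }"
}.freeze

def resolve_operation(operation_id)
  PERSISTED_QUERIES.fetch(operation_id) do
    raise ArgumentError, "unknown operation: #{operation_id}"
  end
end
```

        At this point each operation id behaves much like a named REST endpoint, which is the comment's point.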

        • By gedy 2025-02-22 4:47 · 2 replies

          GraphQL is not arbitrary queries into your database! Folks need to really quit misunderstanding that.

          You can define any schema and relations you want, it's not an ORM.

          • By stickfigure 2025-02-22 6:10 · 1 reply

            In the spectrum of "remote procedure call" on one end and "insert sql here" on the other end, GraphQL is waaaaay closer to SQL than RPC.

            • By ako 2025-02-22 11:55 · 2 replies

              No it’s not, graphql is an rpc that returns a tree of objects where you can indicate what part of the tree is relevant to you.

              • By whstl 2025-02-22 12:53

                Yep. It is not trivial to make it into a pseudo-SQL language, like Hasura did.

                Funny enough, you see this assumption frustrating a lot of people who try to implement GraphQL APIs like this.

                And even if you do turn it into a pseudo-SQL, there's still plenty of control. Libraries allow you to restrict depth, restrict number of backend queries, have a cost function, etc.

              • By stickfigure 2025-02-22 14:31 · 1 reply

                ...and that's exactly the problem! Without a lot of hardening, I (a hostile client) can suck down any part of the database you make available. With just a few calls.

                GraphQL is too powerful and too flexible to offer to an untrusted party.

                • By gedy 2025-02-22 15:14 · 2 replies

                  This is a silly argument and sounds like a hot take from someone who's never used this. You could say the same about REST or whatever. It has nothing to do with "the database".

                  • By stickfigure 2025-02-22 17:27

                    You sound like someone that's never had an adversarial client. I spent years reverse engineering other companies' web APIs. I'm also responsible for a system that processes 11 figures of financial transactions, part of which (for now) is an incredibly annoying GraphQL API that gets abused regularly.

                    REST calls are fairly narrowly tailored, return specific information, and it's generally easy to notice when someone is abusing them. "More like RPC".

                    Your naive GraphQL API, on the other hand, will let me query large chunks of your database at a time. Take a look at Shopify's GraphQL API to see the measures you need to take to harden an API; rate limits defined by the number of nodes returned, convoluted structures to handle cursoring.

                    GraphQL is the kind of thing that appeals to frontend folks because they can rebalance logic towards the frontend and away from the backend. It's generally a bad idea.

                  • By gedy 2025-02-22 17:34

                    > Your naive GraphQL API, on the other hand, will let me query large chunks of your database at a time

                    No it won't, because it's not tied directly to the database and does not allow for arbitrary queries.

                    Any of the "aha!" gotchas you mention are the same issues as you could have with REST, JSON-API, etc.

                    I'm sorry you don't understand what I'm pointing out, but thanks for the convo though.

          • By what 2025-02-22 6:17 · 2 replies

            It is arbitrary queries though? I can send any query that matches your schema and your graphql engine is probably going to produce some gnarly stuff to satisfy those queries.

            • By whstl 2025-02-22 13:14

              You need to program every query resolver yourself, it's not tied to some ORM.

              There are of course products that do this automatically, but it's not really that simple. There's a reason things like Hasura are individual products.

            • By gedy 2025-02-22 12:45 · 1 reply

              No when I say "schema" I mean the GraphQL structure, not your DB schema.

              The GraphQL structure can be totally independent from your DB if need be, and (GraphQL) queries on those types via API can resolve however you need and are defined by you. It's not a SQL generator.

              • By stickfigure 2025-02-22 14:34 · 1 reply

                The problem is not that you'll expose some part of the database you shouldn't (which is a concern but it's solvable). The problem is that you expose the ability for a hostile client to easily suck down vast swaths of the part of the database you do expose.

                • By foobazgt 2025-02-22 16:28 · 2 replies

                  How is this different from REST?

                  • By stickfigure 2025-02-22 17:34

                    Generally, REST calls are narrowly tailored with a simple contract; there are some parameters in and some specific data out. This tends to be easy to secure, has consistent performance and load behavior, and shows up in monitoring tools when someone starts hammering it.

                    On the other hand, unless you've put some serious work into hardening, I can craft a GraphQL query to your system that will produce way more data (and way more load) than you would prefer.

                    A mature GraphQL web API (exposed to adversaries) ends up whitelisting queries. At which point it's no better than REST. Might as well just use REST.

                  • By gedy 2025-02-22 16:42

                    I think the OP is possibly confusing GraphQL with an ORM like Active Record. You are correct that you don't accidentally "expose" any more data than you do with REST or some other APIs. It's just a routing and payload convention. GraphQL schema and types don't have to be 1:1 with your DB or ActiveRecord objects at all.

                    (I'm not aware of any, but if there are actually gems or libraries that do expose your DB to GraphQL this way, that's not really a GraphQL issue)

      • By andrei_says_ 2025-02-21 19:17

        This is a very common pattern and one that’s been solved in Rails by building specialized controllers applying the CRUD interface to multiple models.

        Like the Read for a dashboard could have a controller for each dashboard component to load its data or it could have one controller for the full dashboard querying multiple models - still CRUD.

        The tight coupling is one of many approaches and common enough to be made default.

      • By aantix 2025-02-21 19:39

        The Rails support for multi-model, nested form updates is superb.

        Separate entities on the backend - a unified update view if that’s what’s desired.

        No need for any outside dependencies.

      • By procaryote 2025-02-22 9:25

        You can separate the view and the backend storage without going graphql. You can build your API around things that make sense on a higher level, like "get latest N posts in my timeline" and let the API endpoint figure out how to serve that

        It's seemingly more work than graphql as you need to actually intentionally build your API, but it gets you fewer, more thought-out usage patterns on the backend that are easier to scale.
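        A minimal sketch of such an intentionally high-level endpoint in plain Ruby (the data and the `latest_timeline_posts` function are invented; a real implementation would query a database):

```ruby
# A deliberately coarse endpoint: "get latest N posts in my timeline".
# The caller never composes queries; the server decides how to satisfy it,
# which keeps usage patterns few and easy to optimize.
POSTS = [
  { id: 1, posted_at: 10 },
  { id: 2, posted_at: 30 },
  { id: 3, posted_at: 20 }
].freeze

def latest_timeline_posts(limit)
  POSTS.sort_by { |post| -post[:posted_at] }.first(limit)
end

latest_timeline_posts(2)  # newest two posts, ids 2 then 3
```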

      • By cultofmetatron 2025-02-21 20:33

        You should check out Phoenix LiveView. You can maintain a stateful process on the server that pushes state changes to the frontend. It's a gamechanger if you're building a webapp.

        https://www.youtube.com/watch?v=aOk67eT3fpg&ab_channel=Theo-...

      • By cosmic_cheese 2025-02-22 20:43

        This may be a misunderstanding on my part, but something that’s kept me away from GraphQL is how it makes for a hard dependency on GraphQL client libraries in clients. I find that very unappealing, it’s nicer to be able to e.g. just use platform/language provided networking and JSON decoding (e.g. URLSession + Swift Codable on iOS) and keep the dependency list that much shorter.

      • By grncdr 2025-02-22 22:14

        > If you have a single page front end setup, and a "RESTful" backend

        Rails really doesn't encourage this architecture, quite the opposite in fact.

        > designers and users of the applications I have been building for years never want just one model on the screen.

        ... and this is where Rails excels. When you need to pull in some more data for a screen you just do it. Need to show the most recent reviews of a product in your e-commerce backend? It's probably as simple as:

            <%= render @product.reviews.order(created_at: :desc).limit(5) %>
        
        Of course this can have the opposite problem of bloated views taking forever to load, but Rails has lots of caching goodies to mitigate that.

        ---

        Going back to the GP post

        > Rails feels like it _prefers_ you build "1 model = 1 concept = 1 REST entity"

        That's definitely the simple path, but there are very few constraints on what you can do.

      • By loodish 2025-02-22 12:29 · 2 replies

        GraphQL is nice, but there are all sorts of weird attacks and edge cases because you don't actually control the queries that a client can send. This allows a malicious client to craft really time-expensive queries.

        So you end up having to put depth and quantity limits in place, or calculate the cost of every incoming query before allowing it. Another approach I'm aware of is whitelisting, but that seems to defeat the entire point.

        I use rest for new projects, I wouldn't say never to graphql, but it brings a lot of initial complexity.
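        A depth limit of the kind described can be sketched in a few lines of plain Ruby (the query is modeled here as nested hashes rather than a real GraphQL AST, and `MAX_DEPTH` is an arbitrary example value):

```ruby
# Reject queries whose selection tree nests deeper than a fixed limit,
# one of the guards against expensive hand-crafted queries.
MAX_DEPTH = 3

# Depth of a selection tree modeled as nested hashes, e.g.
# { widget: { reviews: { author: {} } } } has depth 4.
def depth_of(selection)
  return 1 if selection.empty?
  1 + selection.values.map { |child| depth_of(child) }.max
end

def allowed?(selection)
  depth_of(selection) <= MAX_DEPTH
end

allowed?({ widget: { name: {} } })                              # shallow: passes
allowed?({ widget: { reviews: { author: { friends: {} } } } })  # deep: rejected
```

        Cost functions work the same way, just replacing the depth count with a per-field weight summed over the tree.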

        • By foobazgt 2025-02-22 16:24

          I don't understand why you consider this to be a burden. The gateway will calculate the depth / quantities of any query for you, so you're just setting a config option. When you create a REST API, you're making similar kinds of decisions, except you're baking them bespokely into each API.

          Query whitelisting makes sense when you're building an API for your own clients (whom you tightly control). This is the original and most common use case for GraphQL, though my personal experience is with using it to provide 3rd-party APIs.

          It's true that you can't expect to do everything identically to how you would have done it with REST (authz will also be different), but that's kind of the point.

        • By motogpjimbo 2025-02-22 13:53

          A malicious user who had the knowledge and ability to craft expensive GraphQL queries could just as easily use that knowledge to tie your REST API in knots by flooding it with fake requests. Some kind of per-user quota system is going to be required either way.

      • By mr-ron 2025-02-21 19:55 · 1 reply

        Isn’t this where BFF stacks show their worth? As in those Next.js apps that sit between React and Rails?

        • By zdragnar 2025-02-21 20:30

          Not really, then you're just shifting the complexity from the front-end back to a middle man. Now it still exists, and you still have all the network traffic slowing things down, but it lives in its own little service that your rails devs aren't going to bother thinking about or looking at optimizing.

          Much better to just do that in rails in the first place.

      • By saltynutz 2025-02-22 3:13

        [flagged]

    • By globular-toast 2025-02-22 10:08 · 2 replies

      > I really like web apps that are just CRUD forms.

      I really like easy problems too. Unfortunately, creating database records is hardly a business. With a pure CRUD system you're only one step away from Excel really. The business will be done somewhere else and won't be software driven at all but rather in people's heads and if you're lucky written in "SOP" type documents.

      • By searls 2025-02-22 18:46 · 2 replies

        As someone who co-founded one of the most successful Ruby on Rails consultancies in the world: building CRUD apps is a _fantastic_ business.

        There are two types of complexity: essential and incidental. Sometimes, a straightforward CRUD app won't work because the product's essential complexity demands it. But at least as often, apps (and architectures, and engineering orgs, and businesses) are really just CRUD apps with a bunch of incidental complexity cluttering up the joint and making everything confusing, painful, and expensive.

        I've served dozens of clients over my career, and I can count on one hand the number of times I've found a company whose problem couldn't more or less be solved with "CRUD app plus zero-to-one interesting features." No technologist wants to think they're just building a series of straightforward CRUD apps, so they find ways to complicate it. No businessperson wants to believe their company isn't a unique snowflake, so they find ways to complicate it. No investor wants to pour their money into yet another CRUD app, so they invent a story to complicate it.

        IME, >=90% of application developers working today are either building CRUD apps or would be better off if they realized they were building CRUD apps. To a certain extent, we're all just putting spreadsheets on the Internet. I think this—more than anything else—explains Rails' staying power. I remember giving this interview on Changelog ( https://changelog.com/podcast/521 ) and the host Adam asking about the threat Next.js posed to Rails, and—maybe I'd just seen this movie too many times since 2005—it didn't even register as a possible contender.

        Any framework that doesn't absolutely nail a batteries-included CRUD feature-set as THE primary concern will inevitably see each app hobbled with so much baggage trying to roundaboutly back into CRUD that it'll fall over on itself.

        • By andrei_says_ 2025-02-22 19:35

          Similar experience here. I see unnecessarily overengineered SPAs everywhere - from blogs to CRUD-only SAAS and read about devs starting each project as an SPA by default. Including blogs and static websites.

          The choice to spend 10x-50x the resources and deal with the agony of increasing complexity doesn’t make sense to me. Especially in the last few years since Rails’ Hotwire solves updating page fragments effortlessly.

        • By globular-toast 2025-02-22 22:19

          I'm not sure I'm following what you're saying here. Are you saying that, ultimately, everything boils down to CRUD? Like how humans are really just a very elaborate chemical reaction? Or are you saying businesses are literally CRUD? As in you can charge money to create database records?

          Of course everything is just CRUD. That's all a database can do. But writing every piece of software at that level is insanity. When I say pure CRUD I mean software that is literally just a thin veneer over a database. Now that actually is useful sometimes but generally you'll want to be able to write higher level abstractions so you can express your code in a more powerful language than CRUD. Are you really saying you've consulted for businesses that just do CRUD? As in they have meetings about creating, updating and deleting database records?
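For what it's worth, the "thin veneer over a database" framing is easy to make concrete: a pure-CRUD layer really is just four operations keyed by a primary key. A deliberately tiny sketch, with an in-memory Hash standing in for the database table (`CrudStore` is an invented name, not anything from Rails):

```ruby
# Minimal illustration of "pure CRUD": four operations over rows
# keyed by id. A Hash plays the role of the database table.
class CrudStore
  def initialize
    @rows = {}
    @next_id = 0
  end

  # CREATE: insert a row and return it with its generated id.
  def create(attrs)
    id = (@next_id += 1)
    @rows[id] = attrs.merge(id: id)
  end

  # READ: fetch a row by primary key (nil if absent).
  def read(id)
    @rows[id]
  end

  # UPDATE: merge new attributes into an existing row.
  def update(id, attrs)
    @rows[id] = @rows[id].merge(attrs)
  end

  # DELETE: remove the row entirely.
  def delete(id)
    @rows.delete(id)
  end
end
```

The "higher level abstractions" point above is then about naming domain operations that compose these four, rather than exposing them directly.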

      • By nlitened 2025-02-22 12:56 (1 reply)

        I actually believe that most useful real-world software is “one step away from Excel”, and that’s fine

          • By globular-toast 2025-02-23 15:21

          I think it's two steps away from Excel. The first step is making schemas explicit and doing normalisation to avoid data anomalies. This is where RoR gets you. The second step is naming the operations/use cases in your business/domain (preferably with words people already use) rather than trying to frame everything as CRUD operations.

    • By adsteel_ 2025-02-21 19:28

      Rails is set up for that, but it doesn't force you to build like that. You're free to build in other patterns that you design yourself. It's nice to have simple defaults with the freedom to opt into more complexity only if and when you need it.

    • By andrei_says_ 2025-02-23 9:21

      Figma, online photo and video editors, Canva, and browser-based games are the only non-CRUD examples I can think of from recent memory.

    • By philip1209 2025-02-21 18:29

      Yeah, I agree.

      Too many degrees of freedom can degrade an experience, if not used properly.

  • By Sincere6066 2025-02-22 3:28 (4 replies)

    Why is the ruby/rails community so weird? Half of us just quietly make stuff, but the other half seems to need to sporadically reassure everyone that it's not dead, actually.

    > Rails has started to show its age amid with the current wave of AI-powered applications.

    Not everything needs to have bloody AI.

    • By troad 2025-02-22 6:23

      > Why is the ruby/rails community so weird? Half of us just quietly make stuff, but the other half seems to need to sporadically reassure everyone that it's not dead, actually.

      Half the net merrily runs on PHP and jQuery. Far more if you index on company profitability.

      > Not everything needs to have bloody AI.

      Some things are an anti-signal at this point. If a service provider starts talking about AI, what I hear is that I'm going to need to look for a new service provider pretty soon.

    • By zdragnar 2025-02-22 6:21 (1 reply)

      Based on what I've seen from job postings in the US, you can't start a company in healthcare right now unless you've got AI featuring prominently.

      Sadly, I'm not even talking cool stuff like imaging (though it's there too), but anything to do with clinical notes to insurance is all AI-ified.

      Truly, it is the new crypto-web3 hype train, except there'll be a few useful things to come out of it too.

      • By GuardianCaveman 2025-02-22 9:26 (1 reply)

        Yes, now at doctors' offices you have the option to sign an agreement for the doctor to wear a microphone that records the conversation, and then an AI tool automatically creates a report for the doctor. AI and all aspects of medicine seem to be merging.

        • By einsteinx2 2025-02-22 10:44 (1 reply)

          This kind of thing scares me knowing how bad AI meeting and document summaries are, at least what I’ve used. Missing key details, misinterpreting information, hallucinating things that weren’t said…boy I can’t wait for my doctor to use an AI summary of my visit to incorrectly diagnose me!

    • By dismalaf 2025-02-22 4:30 (1 reply)

      > Not everything needs to have bloody AI.

      And even if it did, the Ruby ecosystem has AI stuff...

      • By philip1209 2025-02-22 6:02 (1 reply)

        ankane to the rescue, as normal

        • By dismalaf 2025-02-22 6:42

          True hah. Of course, even if it didn't, most AI libs are actually C++ libs that Python merely interfaces with, and Ruby has probably the best FFI of any language.
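The FFI claim is at least easy to demonstrate: Ruby's standard library ships Fiddle, a libffi wrapper that can bind C functions without writing a native extension. A minimal sketch, binding `strlen` from the C library already loaded into the process (this assumes a typical glibc/musl/macOS environment where `strlen` is resolvable in the running process):

```ruby
require "fiddle"

# Open a handle to the current process (like dlopen(NULL)), so any
# symbol from the already-loaded C library is visible.
libc = Fiddle::Handle.new

# Bind size_t strlen(const char *s) as a callable Ruby object.
strlen = Fiddle::Function.new(
  libc["strlen"],
  [Fiddle::TYPE_VOIDP],  # const char *
  Fiddle::TYPE_SIZE_T    # size_t
)

puts strlen.call("hello")  # => 5
```

Third-party gems like `ffi` offer a higher-level DSL on top of the same idea, which is what makes wrapping C/C++ ML libraries comparatively painless in Ruby.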

    • By pmontra 2025-02-22 13:27

      A former customer of mine is creating AI apps with Rails. After all, what those apps mostly need is to call an API and output the results. Rails, like any other framework, is more than capable of that.
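The pattern pmontra describes is small enough to sketch: an "AI app" is often just a controller action that posts a prompt to a model API and renders the reply, which plain Net::HTTP handles fine. The endpoint URL, model name, and payload shape below follow the common OpenAI-style chat format but are assumptions for illustration, not a specific vendor's API:

```ruby
require "json"
require "net/http"

# Build an OpenAI-style chat request body (shape is an assumption;
# adjust for whichever provider you actually use).
def chat_payload(prompt, model: "example-model")
  { model: model, messages: [{ role: "user", content: prompt }] }
end

# POST the prompt and parse the JSON reply. URL and env var name
# are placeholders, not a real endpoint.
def ask(prompt, uri: URI("https://api.example.com/v1/chat/completions"),
        key: ENV["API_KEY"])
  res = Net::HTTP.post(uri, JSON.dump(chat_payload(prompt)),
                       "Authorization" => "Bearer #{key}",
                       "Content-Type"  => "application/json")
  JSON.parse(res.body)
end
```

In a Rails app this would live in a service object or job, with the controller doing nothing more exotic than rendering the parsed result, which is the commenter's point.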

HackerNews