June 9, 2025
PRESS RELEASE
Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design
Access to the on-device Apple Intelligence model, large language model integration in Xcode, and an elegant new software design across Apple platforms give developers everything they need to build beautiful modern apps with speed and confidence
CUPERTINO, CALIFORNIA Apple today announced new technologies and enhancements to its developer tools to help developers create more beautiful, intelligent, and engaging app experiences across Apple platforms. A beautiful new software design brings more focus to content, and delivers more expressive and delightful experiences across iOS 26, iPadOS 26, macOS Tahoe 26, watchOS 26, and tvOS 26,1 while keeping them all instantly familiar. The Foundation Models framework joins a suite of tools that allow developers to tap into on-device intelligence, and Xcode 26 leverages large language models like ChatGPT, giving developers access to Xcode’s Coding Tools and other intelligent features.
These new resources join the extensive and continuously evolving set of technologies Apple offers developers, including over 250,000 APIs that enable developers to integrate their apps with Apple’s hardware and software features. These APIs span a wide range of capabilities, such as machine learning, augmented reality, health and fitness, spatial computing, and high-performance graphics. With each platform release, Apple expands and refines its technologies and tools to assist developers in bringing their ideas to life and delivering rich, responsive, and optimized experiences across Apple platforms.
“Developers play a vital role in shaping the experiences customers love across Apple platforms,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations. “With access to the on-device Apple Intelligence foundation model and new intelligence features in Xcode 26, we’re empowering developers to build richer, more intuitive apps for users everywhere.”
New Design with Liquid Glass
The elegant new design gives developers the opportunity to make their apps more expressive and delightful, while being instantly familiar. It’s crafted with a new software-based material called Liquid Glass, which combines the optical qualities of glass with a sense of fluidity. This gorgeous new material extends from the smallest elements users interact with every day — like buttons, switches, sliders, text, and media controls — to larger elements, including tab bars and sidebars for navigating apps.
Native frameworks like SwiftUI give developers everything they need to adopt the new design in their apps. The universal design allows developers to bring greater focus to their users’ content, establishing a consistent experience when developing across Apple’s platforms.
With the all-new Icon Composer app, developers and designers are empowered to create visually captivating app icons that enhance their app’s identity. This powerful tool helps create a consistent visual identity for app icons by annotating layers for multiple rendering modes, with advanced features that include blurring, adjusting translucency, testing specular highlights, and previewing icons in various tints.
Foundation Models Framework
With the Foundation Models framework, developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they’re offline, and that protect their privacy, using AI inference that is free of cost.
The framework has native support for Swift, so developers can easily access the Apple Intelligence model with as few as three lines of code. Guided generation, tool calling, and more are all built into the framework, making it easier than ever to implement generative capabilities right into an existing app. For example, Automattic is using the framework in its Day One journaling app to bring users privacy-centric intelligence features.
“The Foundation Models framework has helped us rethink what’s possible with journaling,” said Paul Mayne, head of Day One at Automattic. “Now we can bring intelligence and privacy together in ways that deeply respect our users.”
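In practice, a single request to the on-device model can look roughly like this (a minimal sketch assuming the LanguageModelSession API Apple has shown; the prompt text and property names are illustrative and may differ in the shipping framework):

    import FoundationModels

    // Minimal sketch: one round trip to the on-device model.
    // Assumes the LanguageModelSession API from Apple's developer sessions;
    // the prompt and the printed property are illustrative.
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Suggest three titles for today's journal entry.")
    print(response.content)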
Xcode 26
Xcode 26 is packed with intelligence features and experiences to help developers make their ideas a reality.
Developers can connect large language models directly into their coding experience to write code, tests, and documentation; iterate on a design; fix errors; and more. Xcode has built-in support for ChatGPT, and developers can use API keys from other providers, or run local models on their Mac with Apple silicon, to choose the model that best suits their needs. Developers can start using ChatGPT in Xcode without needing to create an account, and subscribers can connect their accounts to access more requests.2
Coding Tools help developers stay in the flow and be more productive in their tasks. Accessible from anywhere in a developer’s code, Coding Tools provide suggested actions like generating a preview or a playground, or fixing an issue, and can also handle specific prompts for other tasks right inline.
Xcode 26 comes with additional features to keep developers focused and productive, like a redesigned navigation experience, improvements to the localization catalog, and improved support for Voice Control to dictate Swift code and navigate the Xcode interface entirely by voice.
App Intents
App Intents lets developers deeply integrate their app’s actions and content with system experiences across platforms, including Siri, Spotlight, widgets, controls, and more.
This year, App Intents gains support for visual intelligence. This enables apps to provide visual search results within the visual intelligence experience, allowing users to go directly into the app from those results. For instance, Etsy is leveraging visual intelligence to enhance the user experience in its iOS app by facilitating faster and more intuitive discovery of goods and products.
“At Etsy, our job is to seamlessly connect shoppers with creative entrepreneurs around the world who offer extraordinary items — many of which are hard to describe. The ability to meet shoppers right on their iPhone with visual intelligence is a meaningful unlock, and makes it easier than ever for buyers to quickly discover exactly what they’re looking for while directly supporting small businesses,” said Etsy CTO Rafe Colburn.
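For developers new to the framework, the basic shape of an App Intent looks roughly like this (a minimal sketch; the intent name and behavior are illustrative, not taken from Apple's examples):

    import AppIntents

    // Hypothetical intent exposing one app action to Siri, Spotlight, and Shortcuts.
    struct OpenFavoritesIntent: AppIntent {
        static var title: LocalizedStringResource = "Open Favorites"
        static var description = IntentDescription("Opens the user's favorites list.")

        func perform() async throws -> some IntentResult {
            // App-specific navigation would go here.
            return .result()
        }
    }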
Swift 6.2
Swift 6.2 introduces powerful features to enhance performance, concurrency, and interoperability with other languages like C++, Java, and JavaScript. And now, in collaboration with the open-source community, Swift 6.2 gains support for WebAssembly.
Building upon Swift 6’s strict concurrency checking, Swift 6.2 simplifies writing single-threaded code. Developers can now configure modules or individual files to run on the main actor by default, eliminating the need for additional annotations.
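As a rough illustration, opting a whole module into main-actor-by-default isolation might look like this in a package manifest (a sketch assuming the SwiftPM setting that accompanies this Swift 6.2 feature; the exact spelling may differ by toolchain):

    // swift-tools-version: 6.2
    import PackageDescription

    let package = Package(
        name: "MyApp",
        targets: [
            .target(
                name: "MyApp",
                swiftSettings: [
                    // Assumed setting: run this module on the main actor by default,
                    // so single-threaded code needs no extra annotations.
                    .defaultIsolation(MainActor.self)
                ]
            )
        ]
    )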
Containerization Framework
The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It’s built on an open-source framework optimized for Apple silicon and provides secure isolation between container images.
Tools and Resources for Games
Game Porting Toolkit 3 provides developers with updated tools for evaluating and profiling their game. Developers can now customize the Metal Performance HUD, and get onscreen insights and guidance for optimizing graphics code for the best possible performance in the evaluation environment. And developers can use Mac Remote Developer Tools for Windows to build Mac games on a remote Mac in their existing development workflows.
Metal 4 is designed exclusively for Apple silicon, and sets the stage for the next generation of games on Apple platforms with support for advanced graphics and machine learning technologies.
Developers can now run inference networks directly in their shaders to compute lighting, materials, and geometry, enabling highly realistic visual effects for their games. MetalFX Frame Interpolation generates an intermediate frame for every two input frames to achieve higher and more stable frame rates, and MetalFX Denoising makes real-time ray tracing and path tracing possible in the most advanced games.
The Apple Games app gives players a new all-in-one destination for all of their games and the friends they play them with on iPhone, iPad, and Mac. It also gives developers a new dedicated space to reengage their existing players and attract new ones.
Challenges give players a new way to compete with friends in score-based showdowns, turning single-player games into shared experiences. Developers that have Game Center leaderboards for their games can easily add challenges, offering players even more ways to rally a group, crown a winner, and have a rematch.
Game Overlay enhances in-game engagement by integrating Game Center features directly into gameplay. Players can access their next achievement and recent scores, and see which friends are currently playing, making it easy to start a chat — all without leaving the game. Players can also adjust settings and view the latest In-App Events, keeping them connected and in control without breaking immersion.
Managed Background Assets simplifies asset hosting for developers, giving them control over how their app or game downloads assets. Developers can self-host or opt for Apple-Hosted Background Assets, where Apple handles hosting. Every Apple Developer Program membership includes 200GB of Apple hosting capacity for the App Store. Apple-Hosted Background Assets can be submitted separately from an app build.
Tools to Help Protect Kids Online
To ensure kids have enjoyable, enriching, and appropriate in-app experiences, developers can utilize a range of tools — including parental controls and the Sensitive Content Analysis framework — to enhance child safety and ensure privacy. Building on these existing tools, developers can use the new Declared Age Range API to deliver age-appropriate content based on a user’s age range. When developers implement this API, parents can allow their children to share their age range without disclosing a birthdate or other sensitive information, enabling developers to tailor experiences accordingly. The feature is built around privacy: Age range data is shared only if parents choose to allow it, and they can disable sharing at any time.
New App Store Accessibility and App Store Connect Features
New Accessibility Nutrition Labels for App Store product pages help users learn which accessibility features are supported before they download an app or game.
Developers can now share information in App Store Connect about their app or game’s support, such as whether it includes VoiceOver, Voice Control, Larger Text, Captions, and more. An Accessibility Nutrition Label will appear on their app’s product page, specific to each platform it supports. Developers can also add a URL on their app’s App Store product page that links users to a website with more details.
The App Store Connect app on iOS and iPadOS has been updated to let developers view TestFlight screenshots and crash feedback, in addition to receiving push notifications when beta testers provide feedback. The App Store Connect API supports these enhancements, and introduces the ability for developers to create webhooks to get real-time updates, and support for Apple-Hosted Background Assets and Game Center configuration.
Availability
Today’s updates join the ever-expanding collection of intelligent and powerful tools and technologies Apple provides to developers. The Apple Intelligence features detailed require supported devices, which include all iPhone 16 models, iPhone 15 Pro, iPhone 15 Pro Max, iPad mini (A17 Pro), and iPad and Mac models with M1 and later that have Apple Intelligence enabled and Siri and device language set to the same supported language: English, French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean, or Chinese (simplified). More languages will be coming by the end of this year: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese. For more information, visit apple.com/apple-intelligence. Features are subject to change. Some features may not be available in all languages or regions, and availability may vary due to local laws and regulations. For more information about availability, visit apple.com.
All of these features are available for testing starting today through the Apple Developer Program at developer.apple.com, and a public beta will be available through the Apple Beta Software Program next month at beta.apple.com.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
Press Contacts
Adam Dema
Apple
Apple Media Helpline
There's a different thread if you want to wax about Liquid Glass etc [1], but there's some really interesting new improvements here for Apple Developers in Xcode 26.
The new Foundation Models framework around the generative language model stuff looks very Swift-y and nice for Apple developers. And it's local and on device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
The other big thing is vibe-coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives the model context from your codebase. Seems to be a big improvement over the previous, very limited GPT integration with Xcode, and the first time Apple developers have a native version of some of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
> And it's local and on device.
Does that explain why you don't have to worry about token usage? The models run locally?
> You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]
I have the same question. Their Deep dive into the Foundation Models framework video is nice for seeing code using the new `FoundationModels` library but for a "deep dive", I would like to learn more about tokenization. Hopefully these details are eventually disclosed unless someone else here already knows?
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
I guess I'd say "mu", from a dev perspective, you shouldn't care about tokens ever - if your inference framework isn't abstracting that for you, your first task would be to patch it to do so.
To parent, yes this is for local models, so insomuch worrying about token implies financial cost, yes
Ish - it always depends how deep in the weeds you need to get. Tokenisation impacts performance, both speed and results, so details can be important.
I maintain a llama.cpp wrapper, on everything from web to Android and cannot quite wrap my mind around if you'd have any more info by getting individual token IDs from the API, beyond what you'd get from wall clock time and checking their vocab.
I don’t really see a need for token IDs alone, but you absolutely need per-token logprob vectors if you’re trying to do constrained decoding
Interesting point, my first reaction was "why do you need logprobs? We use constrained decoding for tool calls and don't need them"...which is actually false! Because we need to throw out those log probs then find the highest log prob of a token meeting the constraints.
Haha yeah. I’ve seen you mention the llama cpp wrapper elsewhere, it sounds cool! I’ve worked enough with vLLM and sglang to get angry at xgrammar, which I believe has some common ancestry with the GGML stack (GBNF if I’m not mistaken, which I may be). The constrained decoding part is as simple as you’d expect, just applies a bitmask to the logprobs during the “logit processing” and continuing as normal.
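For anyone following along, the masking step is essentially this (a toy sketch, not any particular library's API; real stacks compute the allowed set from a grammar automaton):

    // Toy sketch of constrained decoding's logit-masking step:
    // tokens the grammar disallows are skipped (equivalent to setting them to -inf),
    // then we take the argmax over what remains.
    func constrainedArgmax(logits: [Float], allowed: Set<Int>) -> Int? {
        var best: Int? = nil
        var bestScore = -Float.infinity
        for (tokenID, score) in logits.enumerated() where allowed.contains(tokenID) {
            if score > bestScore {
                bestScore = score
                best = tokenID
            }
        }
        return best
    }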
Do we have the vocab? That's part of the point here. Does it take images? How are they tokenised?
The direction the software engineering is going in with this whole "vibe coding" thing is so depressing to me.
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
To be meta about it, I would argue that thinking "generatively" is a craft in and of itself. You are setting the conditions for work to grow rather than having top-down control over the entire problem space.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful with research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLM's have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship to the ways in which we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
Reading this post is like playing buzz word bingo!
Personally I still love the craft of software. But there are times where boilerplate really kills the fun of setting something up, to take one example.
Or like this week I was sick and didn't have the energy to work in my normal way and it was fun to just tell ChatGPT to build a prototype I had in mind.
We live in a world of IKEA furniture - yet people still desire handmade furniture, and people still enjoy and take deep satisfaction in making them.
All this to say I don't blame you for being dismayed. These are fairly earth shattering developments we're living through and if it doesn't cause people to occasionally feel uneasy or even nostalgia for simpler times, then they're not paying attention.
I share your frustration. But for better or worse, computer language will eventually be replaced by human language. It's inevitable :(
This sounds like a boomer trying to resist using Google in favor of encyclopedias.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
Is any AI assisted coding === Vibe Coding now?
Coding with an AI can be whatever one can achieve; however, I don't see how vibe coding would be related to an autocomplete: with an autocomplete you type a bit of code that a program (AI or not) completes. In VC you almost don't interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; however, if there are contradictory views I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...
(You may also consider rethinking your first paragraph up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan)
Vibe coding is pretty broad and is a spectrum
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
This sounds like someone who doesn't actually know how to code, doesn't enjoy the craft, and probably only got into the industry because it pays well and not because they actually enjoy it.
I enjoy it, but I enjoy what the product enables me to do more than the process; It's a means to an end for me and the process is great, but it gets tedious after more than a decade of it.
I also like cooking, but I like eating more than the actual cooking. It's a means to an end, and I don't need to always enjoy the cooking process.
No, that's what separates vibe coding from the glorified autocomplete. As originally defined, vibe coding doesn't include a final code review of the generated code, just a quick spot check, and then moving on to the next prompt.
The definition is broad and can include testing. Refining requires you to review the code for iterations.
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
Karpathy's definition of vibe coding as I understood it was just verbally directing an agent based on vibes you got from the running app without actually seeing the code.
https://en.wikipedia.org/wiki/Vibe_coding
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
You can take an augmented approach, a sort of capability fusion, or you can spam regenerate until it works.
Not sure if this is supposed to be an insult... Should I probably lean into management at some point? Sure. But do I still enjoy coding and am I still quite capable (without AI assistance)? Yup.
So as long as I can, and as long as I still enjoy it, you'll find me writing code. Lucky to get paid to do this.
Oh, it's not. I'm an IC totally unwilling to become a manager. Some people just enjoy coding.
I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro - which drastically limits your user base, and you would still have to use an online API for most apps. I was hoping they would provide a free AI API on their private cloud for other devices, even if also running small models.
If you start writing an app now, by the time it's polished enough to release it, the iPhone 16 will already be a year-old phone, and there will be plenty of potential customers.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Developers could be adding a feature utilizing LLMs to their existing app that already has a large user base. This could be a matter of a few weeks from an idea to shipping the feature. While competitors use API calls to just "get things done", you are trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone 16-only feature helps anyone's product development, especially when the quality still remains to be seen.
Basically this - network effects are huge. People will definitely buy hardware if it solves a problem for them - so many people bought BlackBerrys just for BBM.
Exactly; it can take at least a couple of years to get big/important apps to use new iOS/macOS features. By then the iPhone 16 would be quite common.
Drastically limits your user base for like 3 years.
Phones still get replaced often, and the people who don’t replace them are the type of people who won’t spend a lot of money on your app.
If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?
Or do they have the ability to reach out to the internet for up-to-the-moment information?
In addition to the context you provide, the API lets you programmatically declare tools that the model can call.
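Something like this, based on the tool-calling shape Apple has shown; the exact type names (Tool, ToolOutput, @Generable, @Guide) are my best recollection and may differ in the shipping API:

    import FoundationModels

    // Rough sketch of declaring a tool the on-device model can call.
    // Type names and initializers are approximations of the announced API.
    struct WeatherTool: Tool {
        let name = "getWeather"
        let description = "Look up the current weather for a city."

        @Generable
        struct Arguments {
            @Guide(description: "The city to look up")
            var city: String
        }

        func call(arguments: Arguments) async throws -> ToolOutput {
            // A real implementation would query a weather service or a local cache.
            ToolOutput("It's 22°C and sunny in \(arguments.city).")
        }
    }

    // The session decides when to invoke the tool while answering a prompt.
    let session = LanguageModelSession(tools: [WeatherTool()])
    let answer = try await session.respond(to: "Do I need a jacket in Cupertino today?")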
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
> The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images , but they do cover many of the same features.
You're kind of right. But at the same time they are nowhere close. The beauty of Linux containerization is that processes can be wholly ignorant that they are not in fact running as root. The containers get, what appear to them, to be the whole OS to themselves.
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
Good point, apps are missing the docker layered file system to isolate container file writes.
It's not that macoscontainers is empty, it's that the site is https://darwin-containers.github.io
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
Ah, that's great! I'd forgotten it moved and struggled to track it down.
Hard same. I wonder if this does anything different to the existing projects that would mean one could use the WSL2 approach where containerd is running in the Linux micro-VM. A key component is the RPC framework - seems to be how orbstack's `macctl` command does it. I see mention of GRPC, sandboxes and containers in the binfmt_misc handling code, which is promising:
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
Providing isolated environments for CI machines and other build environments!
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
Clean build environments for CICD workflows, especially if you're building/deploying many separate projects and repos. Managing Macs as standalone build machines is still a huge headache in 2025.
What's wrong with Cirrus CLI and Tart built on Apple's Virtualization.framework?
Tart is great! This is probably the best thing available for now, though it runs into some limitations that Apple imposes for VMs. (Those limitations perhaps hint at why Apple hasn't implemented this-- it seems they don't really want people to be able to rent out many slices of Macs.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
Same thing containers/jails are useful for on Linux and *BSD, without needing to spin up an entirely separate kernel to run in a VM to handle it.
MacOS apps can already be sandboxed. In fact it's a requirement to publish them to the Mac App Store. I agree it'd be nice to see this extended to userland binaries though.
You can't really sandbox development dependencies in any meaningful way. I want to throw everything and the kitchen sink into one container per project, not install a specific version of Python, Node, Perl or what have you globally/namespaced/whatever. Currently there's no good solution to that problem, save perhaps for a VM.
uv doesn't provide strong isolation; a package you install using uv can attempt to delete random files in your home folder when you import it, for example.
People use containers server side in Linux land mostly... Some desktop apps (flatpak is basically a container runtime) but the real draw is server code.
Do you think people would be developing and/or distributing end user apps via macOS containers?
I might misunderstand the project, but I wish there was a secure way for me to execute github projects. Recently, the OS has provided some controls to limit access to files, etc. but I'd really like a "safe boot" version that doesn't allow the program to access the disk or network.
the firewall tools are too clunky (and imho unreliable).
Orchestrating macOS only software, like Xcode, and software that benefits from Environment integrity, like browsers.
ie: You want to build a binary for macOS from your Linux machine. Right now, it is possible but you still need a macOS license and to go through hoops. If you were able to containerize macOS, then you create a container and then compile your program inside it.
No, that's not at all how that would work. You're not building a macOS binary natively under a Linux kernel.
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container
[2] https://github.com/apple/containerization
I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.
Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (eg. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to make applications running inside the VM able to transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do, but this also has problems since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.
That's called memory ballooning and is supported by KVM on Linux. Proxmox for example can do that. It does need support on both the host and the guest.
it's not as straightforward a solution as it sounds, though
> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs), whether the hypervisor will allocate the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
> or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
A generic vmm can not, but these are specific vmms so they can likely load dedicated kernel mode drivers into the well known guest to get the information back out.
Just looked it up - and the answer is 'balloon drivers', which are special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.
Apparently Docker for Mac and Windows uses these, but in practice Docker containers tend to grow quite large in terms of memory, so I'm not quite sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
It’s one reason I don’t like WSL2. When you compile something that needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.
Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.
add:
[experimental]
autoMemoryReclaim=gradual
to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I just noticed the addition of a container cask when I ran "brew update".
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.
WSLv1 never supported a native docker (AFAIK, perhaps I'm wrong?)
That said, I'd think apple would actually be much better positioned to try the WSL1 approach. I'd assume apple OS is a lot closer to linux than windows is.
This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
In the end they'd probably run into the same issues that killed WSL1 for Microsoft: the Linux kernel has an enormous surface area, and lots of pretty subtle behaviour, particularly around the stuff that is most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate Microsoft's implementation of all these interfaces, because... well, why would there be?
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
If they built it as a kernel extension it would probably be okay with the GPL.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
My impression is they’re basically trying to end third party kernel development; macOS has been making it progressively more difficult to use kexts and has been providing alternate toolkits for doing things that used to require drivers.
It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.
Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.
In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:
1. Creating and configuring a virtual machine:
    hv_vm_create(HV_VM_DEFAULT);
2. Allocating guest memory:
    void* memory = mmap(...);
    hv_vm_map(memory, guest_physical_address, size, HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
3. Creating virtual CPUs:
    hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);
4. Setting registers:
    hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // Set instruction pointer
    hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // Stack pointer
5. Running guest code:
    hv_vcpu_run(vcpu);
6. Handling VM exits:
    hv_vcpu_exit_reason_t reason;
    hv_vcpu_read_register(vcpu, HV_X86_EXIT_REASON, &reason);
Thanks for this ! Apple Silicon?
One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
Better filesystem support (https://orbstack.dev/blog/fast-filesystem) and memory utilization (https://orbstack.dev/blog/dynamic-memory)
Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.
It's possible that Apple has implemented a similar hypervisor here.
Surely if the Windows kernel can be taught to respond to those syscalls, XNU can be taught even more easily. But, AIUI, the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2, so the zero-to-one for XNU could be the huge lift, not the syscalls part specifically.
WSL1 didn't use the existing support for personalities in NT
XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.
> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.
Yep. People consistently underestimate the great piece of technology NT is, it really was ahead of its time. And a shame what Microsoft is doing with it now.
Was it ahead? I am not sure. There was a lot of research on microkernels at the time, and NT was a good compromise between a mono and a microkernel. It was an engineering product of its age. A considerably good one. It is still the best popular kernel today. Not because it is the best possible with today's resources, but because nobody else cares about core OS design anymore.
I think it is the Unix side that decided to bury their heads in the sand. We got Linux. It is free (of charge or licensing). It supported files, basic drivers and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open source microkernel research slowly died after Linux. There is still some with the L4 family.
Now we are overengineering our stacks, using containers to get closer to microkernel capabilities that Linux lacks. I don't want to say it is ripe for disruption, because it is hard and, again, nobody cares (except some network and security equipment, but that's a tiny fraction).
> Was it ahead? I am not sure.
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.
> You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
I meant that NT was a product that matched the state-of-the-art OS design of its time (the 90s). It was the Unix world that decided to stay behind in the 80s forever.
NT is ahead not because it is breaking ground and bringing the design aspects of the 2020s to wider audiences, but because the Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP-11 simulator is all you need.
It is similar to how NASA got stuck with 70s/80s design of Shuttle. There was research for newer launch systems but nobody made good engineering applications of them.
Unix 'died' with plan9/9front, which is far more advanced than Unix v7 for a PDP or a DEC, can't remember.
9front is to Unix what NT is to VMS.
It is as native as any Linux cloud instance.
> The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It's built on an open-source framework optimized for Apple Silicon and provides secure isolation between container images
That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.
> Linux container images generally contain the kernel.
No, containers differ from VMs precisely in requiring dependency on the host kernel.
Hmm, so they do. I assumed that because you pull in a Linux distro, the kernel from that distro is used too, but I guess not. Perhaps they have done some sort of improvement where they have one Linux kernel running via the hypervisor that all containers use. Still can't see them trying to emulate Linux calls, but who knows.
> I assumed that because you pull in a Linux distro, the kernel from that distro is used too,
That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
> That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
Can't edit my posts on mobile, but realized that's, what's the word, not useful... But yeah, sharing the kernel between containers while otherwise keeping them isolated allegedly gives them VM-esque security without the overhead of separate VMs for each image. There's a lot more to it, but you get the idea.
They usually do contain a kernel because package managers are too stupid to realise it’s a container, so they install it anyway.
The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.
Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.
If you've not tried it recently, I suggest give the latest version of podman another shot. I'm currently using it over docker and a lot of the compatibility problems are gone. They've put in massive efforts into compatibility including docker compose support.
Yeah, from a quick glance the options are 1:1 mapped so an
alias docker='container'
Should work, at least for basic and common operations.
What about macOS being derived from BSD? Isn't that where containers came from: BSD jails?
I know the container ecosystem largely targets Linux just curious what people’s thoughts are on that.
OS X pulls some components of FreeBSD into kernel space, but not all (and those are very old at this point). It also uses various BSD bits for userspace.
Good read from the horse's mouth:
https://developer.apple.com/library/archive/documentation/Da...
Thank you—I’ll give that a read. :)
„Container“ is sort of synonymous with „OCI-compatible container“ these days, and OCI itself is basically a retcon standard for docker (runtime, images etc.). So from that perspective every „container system“ is necessarily „docker-like“ and that means Linux namespaces and cgroups.
With a whole generation forgetting they came first in big iron UNIX like HP-UX.
Interesting. My experience w/ HP-UX was in the 90s, but this (Integrity Virtual Machines) was released in 2005. I might call out FreeBSD Jails (2000) or Solaris Zones (2005) as an earlier and a more significant case respectively. I appreciate the insight, though, never knew about HP-UX.
HP-UX Vault, released with HP-UX 10.24, in 1996.
https://en.m.wikipedia.org/wiki/HP-UX
What you searched for is an evolution of it.
Another reason it matters is they might have done it differently which could inspire future improvements. :)
I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!
Some people think it matters to properly learn history, instead of urban myths.
History is one thing, who-did-it-first is often just a way to make a point in faction debates. In the broader picture, it makes little difference IMHO.
Conceptually similar, but different implementations. Containers use cgroups in Linux, and there is filesystem and network virtualization as well. It's not impossible, but it would require quite a bit of work.
Another really good read about containers, jails and zones.
BSD jails are architected wholly differently from what something like Docker provides.
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies on multiple Linux features/tools to assemble/create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
> BSD jails are architected wholly differently from what something like Docker provides.
> Jails are first-class citizens that are baked deep into the system.
Both very true statements and worth remembering when considering:
> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture than that of FreeBSD[3], jails[4] do not exist within it.
The XNU source can be found here[2].
0 - https://en.wikipedia.org/wiki/XNU
1 - https://en.wikipedia.org/wiki/Mach_(kernel)
2 - https://github.com/apple-oss-distributions/xnu
3 - https://cgit.freebsd.org/src/
4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...
Thank you for the links I will take a closer look at XNU. It’s neat to see how these projects influence each other.
> Thank you for the links I will take a closer look at XNU.
Another great resource regarding XNU and OS-X (although a bit dated now) is the book:
Mac OS X Internals
A Systems Approach[0]
0 - https://openlibrary.org/books/OL27440934M/Mac_OS_X_Internals
This is great! Thank you!
> what something like Docker provides
Docker isn't providing any of the underlying functionality. BSD jails and Linux cgroups etc aren't fundamentally different things.
Jails were explicitly designed for security; cgroups are more generalized and mostly about resource control, and Linux containers leverage namespaces, capabilities, and AppArmor/SELinux to accomplish what they do.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails and you will have security problems. And on the flip side, k8s pods share the UTS, IPC, and network namespaces, yet have independent PID and filesystem namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
[1] https://freebsdfoundation.org/freebsd-project/resources/intr...
WSL throughput is not enough for file intensive operations. It is much easier and straightforward to just delete windows and use Linux.
Unless you need to have a working video or audio config as well.
Using the Linux filesystem has almost no performance penalty under WSL2 since it is a VM. Docker Desktop automatically mounts the correct filesystem. Crossing the OS boundary for Windows files has some overhead of course but that's not the usecase WSL2 is optimized for.
With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.
FWIW I get better battery life with ubuntu.
Are you comparing against the default vendor image that's filled with adware or a clean Windows install with only drivers? There is a significant power use difference and the latter case has always been more power efficient for me compared to the Linux setup. Powering down Nvidia GPU has never fully worked with Linux for me.
How? What's your laptop brand and model? I've never had better battery life with any machine using ubuntu.
If they implemented the Linux syscall interface in their kernel they absolutely could.
Aren't the syscalls a constant moving target? Didn't even Microsoft fail at keeping up with them in WSL?
Linux is exceptional in that it has stable syscall numbers and guarantees stability. This is largely why statically linked binaries (and containers) "just work" on Linux, meanwhile Windows and Mac OS inevitably break things with an OS update.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
arm64 macOS doesn't even allow statically linked binaries at all.
On the Windows side, the syscall ABI has been stable since Server 2022, in order to run mismatched container releases.
Not Linux syscalls; they are a stable interface as far as the Linux kernel is concerned.
They're not really a moving target (since some distros ship ancient kernels, most components will handle lack of new syscalls gracefully), but the surface is still pretty big. A single ioctl() or write() syscall could do a billion different things and a lot of software depends on small bits of this functionality, meaning you gotta implement 99% of it to get everything working.
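A tiny Go example of that multiplexing (using golang.org/x/sys/unix; the terminal-size request is just one of thousands of ioctl request codes an emulation layer would have to cover):

```go
// ioctl_surface.go: one syscall, many behaviors. ioctl(2) hides a huge amount
// of kernel functionality behind a single entry point, which is part of why
// reimplementing "the Linux syscall interface" is more than matching numbers.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// The same ioctl syscall with a different request code could just as
	// easily be configuring a network device or driving a KVM virtual machine.
	ws, err := unix.IoctlGetWinsize(int(os.Stdout.Fd()), unix.TIOCGWINSZ)
	if err != nil {
		fmt.Println("ioctl failed (not a terminal?):", err)
		return
	}
	fmt.Printf("terminal size: %d cols x %d rows\n", ws.Col, ws.Row)
}
```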
FreeBSD and NetBSD do this.
They didn't.
WSL no longer implements the Linux syscall interface. WSL1 did, but it wasn't a feasible approach, so WSL2 is basically running VMs with the Hyper-V hypervisor.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
I installed Orbstack without Docker Desktop.
WSL 1.0, given that WSL 2.0 is a regular Linux VM running on Hyper-V.
It would probably be slower than just running a VM.
> Meet Containerization, an open source project written in Swift to create and run Linux containers on your Mac. Learn how Containerization approaches Linux containers securely and privately. Discover how the open-sourced Container CLI tool utilizes the Containerization package to provide simple, yet powerful functionality to build, run, and deploy Linux Containers on Mac.
> Containerization executes each Linux container inside of its own lightweight virtual machine.
That’s an interesting difference from other Mac container systems. Also (more obviously), it uses Rosetta 2.
Podman Desktop, and probably other Linux-containers on macOS tools, can already create multiple VMs, each hosting a subset of the containers you run on your Mac.
What seems to be different here is that a VM per container is the default, if not the only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you used macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
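For a rough sense of what "an externally routed interface per machine" means in Linux terms, here is a hedged Go sketch using the vishvananda/netlink library. The interface and uplink names are made up for illustration, and this mirrors the macvlan approach mentioned above; it is not anything Apple has documented about its own networking stack:

```go
// macvlan_sketch.go: instead of publishing ports, give a workload its own
// externally routable interface on the host's uplink (macvlan bridge mode).
// Linux-only, requires root; "eth0" and "demo0" are illustrative names.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	parent, err := netlink.LinkByName("eth0") // host uplink to attach to
	if err != nil {
		log.Fatal(err)
	}
	mv := &netlink.Macvlan{
		LinkAttrs: netlink.LinkAttrs{Name: "demo0", ParentIndex: parent.Attrs().Index},
		Mode:      netlink.MACVLAN_MODE_BRIDGE, // appears as its own host on the LAN
	}
	if err := netlink.LinkAdd(mv); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(mv); err != nil {
		log.Fatal(err)
	}
	log.Println("created macvlan interface demo0 on eth0")
}
```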
The ground keeps shrinking for Docker Inc.
They sell Docker Desktop for Mac, but that might start being less relevant, and licenses may start to drop.
On Linux there’s just the CLI, which they can’t afford to close-source, since people would just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
There is already a paid alternative, Orbstack, for macOS which puts Docker for Mac to shame in terms of usability, features and performance. And then there are open alternatives like Colima.
I’ve used OrbStack for some time; it made my dev team’s M1 machines run our Kubernetes pods in a much lighter fashion. Love it.
Podman works absolutely beautifully for me; on other platforms I tripped over weird corner cases.
That is why they are now into the "reinventing application servers with WebAssembly" kind of vibe.
It’s really awful. There’s a certain size at which you can pivot and keep most of your dignity, but for Docker Inc., it’s just ridiculous.
They got Sherlocked.
It's cool but also not as revolutionary as you make it sound. You can already install Podman, Orbstack or Colima right? Not sure which open-source framework they are using, but to me it seems like an OS-level integration of one of these tools. That's definitely a big win and will make things easier for developers, but I'm not sure if it's a gamechanger.
All those tools use a Linux VM (whether managed by QEMU or the Virtualization framework) to run the actual containers, though, which comes with significant overhead. Native support for running containers, with no need for a VM, would be huge.
there's still a VM involved to run a Linux container on a Mac. I wouldn't expect any big performance gains here.
Still needs a VM. It'll be running more VMs than something like orbstack, which I believe runs just one for the docker implementation. Whether that means better or worse performance we'll find out.
Yes, it seems like it's actually a more refined implementation than what currently exists. Call me pleasantly surprised!
The framework that container uses is built in Swift and also open sourced today, along with the CLI tool itself: https://github.com/apple/containerization
It looks like nothing here is new: we have all the building blocks already. What Apple has done is package it all nicely, which is nothing to discount: there's a reason people buy managed services over raw metal for hosting their services, and having a batteries-included development environment is worth a premium over needing to assemble it on your own.
The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.
yeah -- I saw it's built on "open source foundations", do you know what project this is?
My guess is Podman. They released native hypervisor support on macOS last year. https://devclass.com/2024/03/26/podman-5-0-released-with-nat...
My guess is nerdctl and containerd.
The CLI sure looks a lot like Docker.
If I had to guess, Colima? But there are a number of open source projects using Apple's virtualization technologies to run a Linux VM to host Docker-type containers.
Once you have an engine, Podman (or Docker) might be the best choice to manage containers.
Being able to drop Docker Desktop would be great. We're using Podman on macOS now in a couple of places; it's pretty good, but it is another tool. Having the same tool across macOS and Linux would be nice.
Migrate to Orbstack now, and get a lot of sanity back immediately. It’s a drop-in replacement, much faster, and most importantly, gets out of your way.
There's also Rancher Desktop (https://rancherdesktop.io/). Supports moby and containerd; also optionally runs kubernetes.
I have to drop Docker Desktop at work and move to Podman.
I'm the primary author of an amalgamation of GitHub's "scripts to rule them all" with Docker Compose, so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use it because my scripts have to work on Linux and probably WSL.
Orbstack
Seems to be this: https://github.com/apple/containerization
vminitd is the most interesting part of this.
Colima is my guess; it's the only thing that makes sense here if they are doing a QEMU VM type of thing.
That's my guess too... Colima, but probably doing a VM using the Virtualization framework. I'll be more curious whether you can select x86 containers, or if you'll be limited to arm64/aarch64. Not that it really makes that much of a difference anymore; you can get pretty far with Linux Arm containers and VMs.
Should be easy enough, look for the one with upstream contributions from Apple.
Oh, wait.
Well, OrbStack isn't really anything special in terms of its features; it's the implementation that's so much better than all the other ways of spinning up VMs to run containers on macOS. TBH, I'm not 100% sure 2025 Apple is capable anymore of delivering a more technically impressive product than OrbStack...
I thought it was more like Colima than OrbStack.
Microsoft did it first to VirtualBox / VMware Workstation, though.
That is what I had been using since 2010, until WSL came to be; it has been ages since I last dual-booted.
Orbstack has been pretty bulletproof
Orbstack is not free for commercial use
Orbstack owners are going to be fuming at this news!
I’ve been using Colima for a long while with zero issues, and that leverages the older virtualization framework.
WSL 2 involves a VM. WSL 1, which is still maintained and usable, doesn't.
https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
Ok, I've squeezed containerization into the title above. It's unsatisfactory, since multiple announced-things are also being discussed in this thread, but "Apple's kitchen-sink announcement from WWDC this year" wouldn't be great either, and "Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design" is right out.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
Title makes sense to me.
It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).
Even if it's not 100% compatible, this is huge news.
> Apple Announces Foundation Models and Containerization frameworks, etc.
This sounds like Apple announced two things, AI models and container-related stuff. I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
The article says that what was announced is "foundation model frameworks", hence the awkward twist in the title, to get two frameworkses in there.
Small nitpick but "Announces" being capitalized looks a bit weird to me.
It’s title case[0].
They labeled it a nitpick. Seems fair.
Me too - I thought I'd fixed that! Fixed now, thanks.