
In my previous post I talked about how I got rid of hundreds of thousands of lines of Objective-C code while at Audible, and I explained why keeping Objective-C code around is a terrible idea. And I explained that I’m not stuck in the old ways; I’m not the guy insisting on the supremacy of Objective-C despite the obvious evidence against it. I’m the guy who got rid of Objective-C — with glee and (oops, sorry Audible marketing team for the screwup) wild abandon!
Then of course I wrote some Objective-C code recently and really, really loved it.
I wanted to replace my homegrown static website/blog generator because I no longer wanted to use the language it was written in, for reasons.
I took it as an opportunity to learn Python — but it turned out that my heart wasn’t in it (not Python’s fault; great language) and I ended up screwing it up. (See Blog Fuckup).
I thought about some alternatives: Swift, which I know well; Rust and Go, which would have the advantage of helping me branch out from the Apple ecosystem; and good old C, my happy-go-lucky friend who still sprints faster than every brash new language.
Of those I was leaning toward C because speed is an issue. I wanted to make rendering this blog, over 25 years old and with thousands of posts, happen in under one second. The system I was replacing took a few seconds. But I wanted more speed (personality flaw).
And then I thought, I swear just for a split second, about how great it would be if C had something a little nicer than C structs for modeling my app’s data — and oh well too bad there’s nothing like that.
And then I remembered Objective-C, which is C plus some things a little nicer than C structs. 🎩🦖
Anyone new to Objective-C thinks it’s difficult and maybe a bit harsh because [[those squareBrackets] lookInsane:YES].
Once you get past that, which takes a day or two given a good-faith effort, you’ll realize how small a language it is, how easy to hold in your palm and turn around and understand all sides of it. And you’ll appreciate how easy it is to make good decisions when you don’t have a surplus of language features to choose from.
And you’ll be reassured to know that Objective-C is probably never going to change, which means tech debt will accumulate much more slowly than with newer languages. (Unless, of course, you count Objective-C itself as tech debt. You don’t have to, though.)
It’s a cliché to call Objective-C a more elegant weapon for a more civilized age. It’s better thought of, these days, as a loaded footgun.
But I did absolutely love writing this code! So much fun. And now I’ve got another little thing brewing, also in Objective-C, coming soon-ish.
PS The website/blog generator app is called SalmonBay. I really don’t expect anyone else in the world to use it, and I expect no contributions, but it is available as open source. (I put it on Codeberg, for reasons.)
PPS SalmonBay does a clean build of this blog in under a second. 🎸
Every so often I get weirdly obsessed with Objective-J, which "has the same relationship to JavaScript as Objective-C has to C". It is (was?) an absolutely bonkers project. I think it has more or less died since 280 North was acquired.
Didn't expect to see Cappuccino mentioned ever again. It was so wild that you could use AppKit documentation for Cappuccino. Apps were so pretty and yet so fast.
I remember back in 2009 I really liked their coffee machine icon. I emailed the devs, they referred me to some design studio, and then to my surprise they replied and said that it's Francis Francis X1. Now I'm looking at it in my home office.
Same. I remember when this first came up and I was like "this is so weirdly interesting."
Sad that they got acquired because it was just fascinating what they were doing, even if I was never going to use it.
Holy shit, it’s still being actively developed and maintained https://github.com/cappuccino/cappuccino
More amazingly the guy doing the most recent maintaining[1] is a medical Professor at Freiburg Uni.
And wow, it's basically a web version of Cocoa! Check this out: https://ansb.uniklinik-freiburg.de/ThemeKitchenSinkA3/
Anyone know if this is or ever was the basis for Apple's iCloud web apps on iCloud.com (e.g. Keynote / Pages / Notes etc.)? Those apps are heroic attempts to replicate the desktop app experience in the browser. I'm curious what web framework is underlying it. Side note - if I could install 3rd party apps w/ similar UIs in my iCloud dashboard that would be interesting.
I think originally Apple was using SproutCore, which had similar aspirations to produce "desktop quality" web apps, and was one of the early frameworks to implement things like two-way data binding. This was back when iCloud was called MobileMe.
SproutCore 2.0 became Ember.js 1.0, but I don't know if Apple are still using it.
Oh wow I didn’t know that’s where Ember came from.
Cappuccino was not an Apple project, so I doubt that is what Apple used to develop those projects. That, and 280 North eventually got acquired by Motorola.
Yes, after which they announced they were canning their "Atlas" project, which was meant to be an Interface Builder for the web. Motorola decided they wanted to keep the technology in house.
No idea if they ever did anything with it!
Best I can tell, it turned into Google Web Designer!
I'm on the outside, but best I can tell:
- You're thinking of a UI design tool called "Ninja"
- Google purchased Motorola Mobility and the Ninja project got cancelled
- Google launched Google Web Designer, that basically had an almost identical UI. As far as I can tell the internals are different, but probably shared some code or at least design work.
Possibly! The tool was definitely named Atlas when it was going to be an open source tool made by 280 North [1]. But it could have been renamed Ninja after Moto acquired it.
[1] https://arstechnica.com/gadgets/2009/03/atlas-a-visual-ide-f...
I really miss Objective-C, and in the world of Swift craziness [1] I'm reminded often of this blog post [2] wondering what would have happened if Apple hadn't encountered Second System Syndrome for its recommended language.
(There's a decent argument it encountered it in iOS and macOS too.)
[1] https://github.com/swiftlang/swift-evolution/blob/main/propo... -- apologies to the authors, but even as a previous C++ guy, my brain twisted at that. Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective C.
[2] https://medium.com/goodones/pareto-optimal-apple-devtools-b4...
Obj-C’s simplicity can be nice, but on the other hand I don’t miss having to bring in a laundry list of CocoaPods to have features that are standard in Swift. I don’t miss maintaining header files or having to operate in old codebases that badly manage Obj-C’s looseness either.
You can simply import Swift packages: either they expose an Obj-C interface, or you can provide one yourself by wrapping the package in Swift and exposing the things you need via @objc etc. You can, at the same time, also hide the wrapped framework and make it an implementation detail of the wrapper, so that you could switch the wrapped framework while keeping your app code mostly stable. This also reduces the number of imported symbols: `import 3rdPartyFramework` imports all symbols, extensions, classes, types, etc., whereas `import MyWrapper` only brings in the things you really need.
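A minimal sketch of that wrapper idea (all names here, like VendorA and Notifier, are made up for illustration; the @objc/NSObject exposure the comment mentions is Darwin-only, so it's noted in comments rather than shown):

```swift
// Stand-in for a hypothetical third-party Swift package we don't want
// to import all over the app:
enum VendorA {
    static func send(_ s: String) -> Int { s.count }
}

// The wrapper is the only type the rest of the app ever sees; swapping
// VendorA for another framework later only touches this file. On Darwin
// you would additionally subclass NSObject and mark the class and its
// methods @objc to expose them to Objective-C callers.
public final class Notifier {
    public init() {}

    public func notify(_ message: String) -> Int {
        VendorA.send(message)  // implementation detail, never exposed
    }
}

let n = Notifier()
// App code depends only on Notifier, not on VendorA.
```

The key point is that the wrapper's public surface is deliberately small, so the dependency stays replaceable.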
I don't miss @ and [] all over the place, even if Objective-C has some cool ideas in it.
I do agree Swift's design has gone a bit overboard; we wanted Delphi and got Haskell instead.
However, note the same phenomenon happening with other languages: as soon as you have a team being paid to develop a language, their job depends on adding features in every single release.
Programming languages are products; even those who praise C's simplicity have certainly not read the compiler manuals about language extensions, or the mailing lists for WG14 proposals.
> we wanted Delphi and got Haskell instead
Please elaborate.
> However, note the same phenomenon happening with other languages: as soon as you have a team being paid to develop a language, their job depends on adding features in every single release.
Users also request those features. You said yourself that programming languages are products. In that sense, people are always evaluating them through the lenses of utility (the economics concept), and if they have to pick between two languages, with similar capabilities, they will pick up the one that maximises that utility.
This leads to weird design decisions getting inserted into the fabric as a consequence (the current state of C++ comes to mind). And given developers are too opinionated about everything, we get politics as a side effect.
> Please elaborate.
Ideally Swift would have been something with the compile speed of Delphi, its RAD capabilities, and strong typing with a good enough type system to support a transparent migration path from Objective-C, and that was it.
Instead we have quite a few type system ideas going back and forth, with some hard changes across language versions, as if playing with Haskell's type system and GHC feature flags.
> Users also request those features.
Some users request those features; most of them come from the team's own roadmap, regarding what cool features to add next.
Then as politics gets into the game, naturally the process of which features land in the stable implementation, and which fall by the wayside, depends pretty much on how they get pushed into adoption.
>[1] https://github.com/swiftlang/swift-evolution/blob/main/propo... -- apologies to the authors, but even as a previous C++ guy, my brain twisted at that. Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective C.
These kinds of features are not intended for use in daily application development. They're systems-language features designed for building high performance, safe, very-low-level code. It will be entirely optional for the average Swift developer to learn how to use these features, just in the same way that it's optional for someone to learn Rust.
The "Swift has too many keywords now" meme makes me want to go insane. The vast majority of Swift code never runs into any of that stuff; so, what advocates of it are saying is in effect "we don't want Swift to expand into these new areas (that it has potential to be really good at) even if it's in a way that doesn't affect current uses at all."
That said, the Swift 6 / Strict Concurrency transitions truly have been rough and confusing. It's not super clear to me that much of it could have been avoided (maybe if the value of Approachable Concurrency mode had been understood to be important from the beginning?), and the benefits are real, but my gut feeling is that a lot of the "Swift is too complicated" stuff is probably just misplaced annoyance at this.
Swift's concurrency story is what happens when a multi-year project meets Apple's fixed six-month Swift release timeline. And being written by highly knowledgeable but low-level engineers who've never written an iOS app in their lives meant there was a huge approachability hole they've only recently worked their way out of; even that has major issues (MainActor defaulted on in Xcode but not in Swift itself).
Such a mess. You can tell the people who designed it never wrote a client or an app in their lives. It is pure academic pedantry on display.
I go back and forth. I do miss the simplicity of objc at times though. I think in a short amount of time someone can become close to an expert in objc. Swift is already incredibly complicated and there's no end in sight.
A few years from now O'reilly will publish a bestseller called Swift: The Good Parts
I hate how pedantic and useless some of the features of Swift are, pushed down by academics who don't write apps or services themselves.
Simple example. In Objective-C:

    if (myObject) {
    }

In Swift:

    if myObject != nil {
    }
Also, optionals in Swift could have totally been avoided if they had adopted a prototype-based language (where objects are basically never nil). Lua did this, and it is very elegant.
But instead we got a half-baked optional system, which is backwards (similar to Java), and didn't help with the practicality of the language at all; meanwhile you can still crash an app doing myArray[1].
I love Obj-C, but the Swift version isn't as bad as you say:
    if let myObject {
        // myObject is non-nil in here
    }
The Swift version is also using first-class optionals. In Obj-C there is no small chance you'll confuse `NULL` with `nil` or `0`, or that you'll message `nil` resulting in `nil`... and in well-built software you have to guard against that.

Aside: Obj-C is narrowly focused on adding objects (in the Smalltalk sense) to C, whereas Swift is trying to deliver a compiler and language with memory safety _guarantees_... Turns out that means you need a lot more language. Not to mention the `async` syntax/feature explosion.
Obj-C is "hippie" and Swift is "corporate suit" + "we're doing serious work here!"
Finally I want to say: I believe Obj-C was a huge competitive advantage and secret weapon that let Apple deliver an OS with so much more built-in functionality than any competitor for years and years. (Obj-C is great for system APIs) That's under-appreciated.
> you still can crash an app doing myArray[1]
the first thing i do when starting a new project:

    extension Array {
        subscript(safe index: Int) -> Element? {
            indices.contains(index) ? self[index] : nil
        }
    }

there was talk in the Swift forums about adding that as standard, but it seems to have died off... [0]

[0] https://forums.swift.org/t/draft-adding-safe-indexing-to-arr...
Even with that, there is nothing stopping you from accidentally using [i]. Also, there are just a ton of Swift APIs and bridged APIs that take an index and then crash... for full coverage you would need hundreds of safe wrappers. (Doing what you propose, though, at least gives you some peace of mind.)
Also Swift has a lot of other areas where it just lacks any safeguards… Memory issues are still a thing. It’s using ARC under the hood after all.
Infinite recursion is still a thing (not sure if this would even be detectable - probably not).
Misuse of APIs.
And it introduces new issues: which methods are being called depends on your imports.
In my experience Swift lulls you into a false sense of safety while adding more potential safety issues and "only" solving some of the less important ones. Obj-C has nullability annotations as well, which can warn you if used appropriately. Obj-C also has lightweight generics. In practice this is all you need.
> And it introduces new issues: which methods are being called depends on your imports.
also, depending on how you cast it, it will call the method of the cast, not the actual one on the instance (which completely caught me off-guard when i started Swift)

> objc also has lightweight generics. In practice this is all you need.
i feel this too sometimes; sometimes simple really is best... tho i think some of these decisions around complexity are to allow for more optimization, so C++ can be chucked away at some point...

I never have to miss Objective-C because I still write in it! I never hopped on the Swift train because I saw it as an inferior language to Objective-C. And a decade later I'm more sure of that. It's such a joy to still write in an amazing language. Yes, I use Python too and that is a joy. But nothing like Objective-C.
> Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective C.
Rust? Rust is basically a simpler Swift. The Objective-C bindings are really nice too, and when you're working with Obj-C you don't have to worry about lifetimes too much, because you can lean on the Objective-C runtime's reference counting.
I think the way to think about it is that with Rust, it's as if all the goodness in Swift was implemented at the "C" level, and the Objective-C bit is still just a library-level runtime layer on top. Whereas Swift brings its own runtime, which greatly complicates things.
I would absolutely not call Rust a simpler Swift. Swift doesn't have an ownership/borrowing system, explicit lifetimes for objects, or Rust's much more expressive (and therefore complex) macro support...
I get that there's a tradeoff. Rust requires you to be way more explicit about what you're intending upfront and that can, in the long term, lead to simpler code -- but there's no dimension (depth-wise or breadth-wise) that I'd call Rust simpler.
> I would absolutely not call Rust a simpler Swift. Swift doesn't have an ownership/borrowing system
Swift already does have those things but unlike Rust, they are opt-in.
Not going to argue which language is simpler, but sorry, you don't seem like someone who knows Swift very well.
While Swift now has the `borrowing` and `consuming` keywords, support for storing references is nonexistent, and returning/storing `Span`s, etc., is only possible through experimental `@lifetime` annotations.
Swift is a nice language, and its new support for the bare necessity of affine types is a good step forward, but it's not at all comparable with Rust.
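For the curious, the opt-in ownership pieces mentioned above look roughly like this. A sketch, assuming Swift 5.9 or later; FileHandleToken is a made-up type, not a real API:

```swift
// A noncopyable type: the compiler enforces single ownership.
struct FileHandleToken: ~Copyable {
    let descriptor: Int32

    // borrowing: read-only access, the caller keeps ownership
    borrowing func describe() -> String { "fd \(descriptor)" }

    // consuming: takes ownership; the value can't be used afterwards
    consuming func close() -> Int32 { descriptor }
}

func demo() -> (String, Int32) {
    let token = FileHandleToken(descriptor: 3)
    let label = token.describe()   // fine: just a borrow
    let fd = token.close()         // token is consumed here
    // token.describe()            // compile error: used after consume
    return (label, fd)
}

let (label, fd) = demo()
```

Unlike Rust, none of this is required: copyable types and ARC classes keep working as before, which is what "opt-in" means here.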
Except the entire design of Swift is meant to make everything more automated.
* automated exclusivity with value types and value witness tables, and classes as ARC types (i.e. like Rust's Arc<Mutex<T>>)
* automated interop with C/C++/Obj-C through the Clang AST importer
Maybe they could have started with Rust and added on what they needed, but why not build a new language at that point, when things so fundamental are involved?
Source: I worked in Lattner's org at the time of Swift's inception (on an unrelated backend) and that was the motivation. I also worked on the Swift compiler for a little bit some years later in my career.
> Maybe they could have started with rust and added on what they needed
Unlikely, I think, because of timelines. Swift’s first public release was in June 2014. Rust is a few years older (first public release in January 2012), but that wasn’t the rust we have today. It still had garbage collection, for example (https://en.wikipedia.org/wiki/Rust_(programming_language)#20...)
Rust is still more complicated than Swift, but you needn't worry: the Swift team is flexing their muscles hard to ensure that Swift becomes the biggest, most complicated language on Earth and wins on complexity, cognitive burden, and snail performance once and for all eternity. Their compiler already times out on the language; soon even an M7 will give up.
One of my recurring language design hot takes is that it's easier to design for speed and then make it easy to use than it is to make it easy to use and then try to speed it up.
C++ has been trying to make C easier to use for 40 years, and it's still not there. So I wouldn't call that easier.
It has been there for me since 1993, in every single scenario, as an alternative to C, when the choice boils down to either of them.
Since 1993, I only have used C when required to do for various reasons out of my control, or catching up with WG14 standards.
how would you write something like

    #include <print>
    #include <map>
    #include <string>

    int main(int argc, char** argv)
    {
        using namespace std::literals;
        std::string foo = "foo:";
        foo += argv[0];
        std::map<std::string, int> m{
            {foo, 123},
            {"count: "s + std::to_string(argc), 456}
        };
        std::println("{}", m);
    }

in C?

Even better example,

    import std;

    int main(int argc, char** argv)
    {
        using namespace std::literals;
        auto foo = "foo:"s;
        foo += argv[0];
        std::map<std::string, int> m{
            {foo, 123},
            {"count: "s + std::to_string(argc), 456}
        };
        std::println("{}", m);
    }

Sure, there are nice parts of C++. And there are also brain-dead parts that add needless complexity such as:
* rvalue references
* the difference between auto, decltype, typeof
* unreadable template monstrosities
* various different flavors of "smart" pointer
* the continued existence of footguns relating to UB, dangling pointers, unexpected temporary lifetimes, etc
* total absence of a build system or package management
* legacy APIs that still take raw pointers
* concepts, a half-assed attempt at generic constraints
No worries, you get some of those in C23, and C2y.
Where is C's build system and package management?
Yeah, because using _Generic alongside typeof and preprocessor macros isn't a half-assed attempt at generics.
You misunderstood. Perpetuating C's weaknesses, and then adding additional complexity, is not a defense.
It kind of is, when the goal was to be TypeScript for C, before this was even a concept.
Now ideally we would all be using Modula-2, Ada, Delphi, VB, C#,.... and co, but given that even C compilers are nowadays written in C++, we make do with what we have, while avoiding C flaws as much as possible.
C++, if anything, made C user-friendly.
C++ is trying to make something EASIER to use?
Examples?
I recently started writing for macOS in Swift and, holy hell, the debuggability of the windowing toolkits is actually unparalleled. I've never seen something that is this introspectable at runtime, easy to decompile and analyze, intercept and modify, etc. Everything is so modular, with subclassing and delegation patterns everywhere. It seems all because of the Objective-C runtime, as without it you'd end up needing something similar anyway.
You can reach into built-in components and precisely modify just what you want while keeping everything else platform-native and without having to reimplement everything. I've never seen anything like this before, anywhere. Maybe OLE on Windows wanted to be this (I've seen similar capabilities in REALLY OLD software written around OLE!) but the entirety of Windows' interface and shell and user experience was never unified on OLE so its use was always limited to something akin to a plugin layer. (In WordPad, for example)
The only thing that even seems reminiscent is maybe Android Studio, and maybe some "cross-platform" toolkits that are comparatively incredibly immature in other areas. But Android Studio is so largely intolerable that I was never able to dig very far into its debugging capabilities.
I feel like I must be in some sort of honeymoon phase but I 100% completely understand now why many Mac-native apps are Mac-native. I tried to write a WinUI3 app a year or two ago and it was a terrible experience. I tried to get into Android app development some years ago and it was a terrible experience. Writing GUIs for the Linux desktop is also a terrible experience. But macOS? I feel like I want to sleep with it, and I weep for what they've done with liquid glass. I want the perfection that led to Cocoa and all its abstractions. Reading all the really, super old documentation that explains entire subsystems in amazingly technical depth makes me want to SCREAM at how undocumented, unpolished and buggy some of the newer features have gotten.
I've never seen documentation anything like that before, except for Linux, on Raymond Chen's blog, and some reverse-engineering writeups. I do love Linux but its userspace ecosystem just is not for me.
Maybe this is also why Smalltalk fiends are such fans. I should really get into that sometime. Maybe Lisp too.
Writing objective-c code for mac os GUI apps was one of those things that finally made "interfaces"/"protocols" really click for me as a young developer. Just implement (some, not even all) method in "FooWidgetDelegate", and wire your delegate implementation into the existing widget. `willFrobulateTheBar` in your delegate is called just before a thing happens in the UI and you can usually interfere or modify with the behavior before the UI does it. Then `didFrobulateTheBar` is called after with the old and new values or whatever other context makes sense and you can hook in here for doing other updates in response to the UI getting an update. If you don't implement a protocol method, the default behavior happens, and preserving the default behavior is baked into the process, so you don't have to re-implement the whole widget's behavior just to modify part of it.
It's probably one of the better UI frameworks I think I've used (though admittedly a lot of that is also in part due to "InterfaceBuilder" magic and auto-wiring). Still, I often wish for that sort of elegant "billions of hooks, but you only have to care about the ones you want to touch" experience when I've had to use other UI libraries.
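That "implement only the hooks you care about" shape can be sketched in a few lines; Swift here for brevity, and note that Frobulator and its will/did methods are just the commenter's hypothetical names, not a real framework API:

```swift
// Cocoa-style delegate protocol with a "will" veto hook and a "did"
// notification hook.
protocol FrobulatorDelegate: AnyObject {
    // Return false to veto the change before it happens.
    func willFrobulateTheBar(from old: Int, to new: Int) -> Bool
    func didFrobulateTheBar(from old: Int, to new: Int)
}

// Default implementations play the role of Obj-C's optional protocol
// methods: unimplemented hooks fall back to the default behavior.
extension FrobulatorDelegate {
    func willFrobulateTheBar(from old: Int, to new: Int) -> Bool { true }
    func didFrobulateTheBar(from old: Int, to new: Int) {}
}

final class Frobulator {
    weak var delegate: FrobulatorDelegate?   // weak, as in Cocoa
    private(set) var bar = 0

    func setBar(_ newValue: Int) {
        let old = bar
        // No delegate, or no veto: the default behavior proceeds.
        guard delegate?.willFrobulateTheBar(from: old, to: newValue) ?? true else { return }
        bar = newValue
        delegate?.didFrobulateTheBar(from: old, to: newValue)
    }
}

// A delegate that only cares about the "did" side:
final class ChangeLogger: FrobulatorDelegate {
    var events: [String] = []
    func didFrobulateTheBar(from old: Int, to new: Int) {
        events.append("\(old) -> \(new)")
    }
}

let frobulator = Frobulator()
let logger = ChangeLogger()
frobulator.delegate = logger
frobulator.setBar(5)
```

The widget keeps owning its behavior; the delegate only intercepts or observes the slices it opted into.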
> I feel like I must be in some sort of honeymoon phase but I 100% completely understand now why many Mac-native apps are Mac-native.
it seems like everybody prefers ios, but i really still think after all these years i prefer appkit; it really is so well documented and the quality of the api is the best i've seen by a long mile

> Reading all the really, super old documentation that explains entire subsystems in amazingly technical depth
Any links?
> Maybe this is also why Smalltalk fiends are such fans.
I started getting interested in Smalltalk after I tried writing a MacOS program by calling the Objective-C runtime from Rust and had a surprisingly good time. A Smalltalk-style OO language feels like a better base layer for apps than C.
> Any links?
For example, this guide I was reading just earlier: https://developer.apple.com/library/archive/documentation/Co...
Generally everything in that documentation archive is absolutely amazing. I don't know why it's an archive; presumably they laid off or reassigned the entire team working on it and there will be no more. The closest thing today would probably be Technotes: https://developer.apple.com/documentation/technotes
> Writing GUIs for the Linux desktop is also a terrible experience.
I've found the DX for GTK to be at least tolerable. Not fantastic, but I can at least look at a particular API, guess how the C-based GObject code gets translated by my language bindings of choice, and be correct more often than not. The documentation ranges from serviceable to incomplete, but I can at least find enough discussion online about it to get it to do what I want.
Also, GTK apparently ships with a built-in inspector tool now. Ctrl-Shift-I in basically any GTK app opens it. That alone is extremely useful, and you basically have to do nothing to get it. It's free.
I've never tried Qt. The applications that use it always seem off to me.
As for OLE, you're actually thinking of COM, not OLE. They were co-developed together: COM is a cross-language object system (like GObject), while OLE is a set of COM interfaces for embedding documents in other arbitrary documents. Like, if you want to put a spreadsheet into a Word document, OLE is the way you have to do that. Microsoft even built much of IE[0] on top of OLE to serve as its extension mechanism.
OLE is dead because its use case died. Compound documents as a concept don't really work in the modern era where everything is same-origin or container sandboxed. But COM is still alive and well. It's the glue that holds Windows together - even the Windows desktop shell. All the extension interfaces are just COM. The only difference is that now they started packaging COM objects and interfaces inside of .NET assemblies and calling it "WinRT". But it's the same underlying classes. If you use, say, the Rust windows crate, you're installing a bunch of language bindings built from WinRT metadata that, among other things, call into COM classes that have been there for decades.
Mac apps are Mac native because Apple gives enough of a shit about being visually consistent that anyone using a cross-platform widget toolkit is going to look out of place. Windows abandoned the concept of a unified visual identity when Windows 8 decided to introduce an entirely new visual design built around an entirely new[1] widget toolkit, with no consideration of how you'd apply any of that to apps using USER.dll/Common Controls. As it stands today, Windows does not have a good answer to "what widget toolkit do I use to write my app", and even Microsoft's own software teams either write their own toolkits or just use Electron.
[0] Petition to rename ActiveX to WebOLE
[1] OK, yes, XAML existed in the Vista era, but that was .NET only, and XAML apps didn't look meaningfully different from ones building their own USER.dll window classes like it's 1993.
> As for OLE, you're actually thinking of COM, not OLE. They were co-developed together: COM is a cross-language object system (like GObject), while OLE is a set of COM interfaces for embedding documents in other arbitrary documents. Like, if you want to put a spreadsheet into a Word document, OLE is the way you have to do that. Microsoft even built much of IE[0] on top of OLE to serve as its extension mechanism.
Oops, you are right about COM. I got them mixed up because I was thinking of the integration in WordPad.
> Mac apps are Mac native because Apple gives enough of a shit about being visually consistent that anyone using a cross-platform widget toolkit is going to look out of place. Windows abandoned the concept of a unified visual identity when Windows 8 decided to introduce an entirely new visual design built around an entirely new[1] widget toolkit, with no consideration of how you'd apply any of that to apps using USER.dll/Common Controls. As it stands today, Windows does not have a good answer to "what widget toolkit do I use to write my app", and even Microsoft's own software teams either write their own toolkits or just use Electron.
Mac apps are Mac native because the APIs are amazing and the ROI can be really really good. It takes so much effort to do the same from scratch, especially cross-platform, that, you're right, I can smell anything written in Qt (because the hitboxes and layout are off) or GTK (because the widget rendering is off).
With that said though, wxWidgets seems to translate EXTREMELY well to macOS, though last I used it, it didn't have good support for Mojave's dark mode. Maybe support is better nowadays. For example, Audacity appears to me as just a crammed Mac-native app rather than blatant use of a cross-platform toolkit, and wxPython used well can be completely mistaken for fully native.
wxWidgets calls the underlying native controls directly; Qt uses it to inform how to render but still does its own thing, at least according to a discussion I had with a Qt engineer some years back.
(I am open to being corrected)
wxWidgets has properly supported dark mode for a bit now.
9front can mount old DOC/XLS documents as OLE 'filesystems' first and then extract the tables/text from them.
As for sandboxing, 9front/plan9 uses namespaces, but shared directories exist, of course. That's the point of computing: the user will want to bridge data in one way or another, be it with pipes or with filesystems/clipboard (or a directory acting as a clipboard with objects, which would be the same in the end).
Welcome to Smalltalk, Lisp, Java and .NET, which, alongside NeXTSTEP/OS X, share a common lineage of tooling ideas and programming language features.
Hence why, given the option, I'd rather stay in such environments.
Now Android Studio is the product of Google's mess, and I am glad to have moved away from Android development, it also doesn't have anything to do with enjoying pure Java development on desktop (Swing, SWT, JavaFX) and server.