
Thinking of using AI for your Shopify 3D models? Read this first. We compare AI-generated 3D 'slop' vs. handcrafted models to show why the human touch is very much required to attain positive ROI.
Humans are starting to nail the art of AI slop identification in text, images, and video. But what about 3D? We create 3D product configurators for e-commerce brands, and we've been asked many times why we're not using AI to generate the product models we use.
While LLMs can write code and diffusion models can win art competitions, the 3D generation landscape remains extremely... bumpy.
These generated assets look deceptively "good enough" at a glance, but their utility breaks down completely on closer inspection. We recently worked with the American Pickleball League brand to create a paddle customizer.
For AI shits and giggles, we decided to compare an AI-generated model with our handcrafted version. Below is a 3D view of both assets. Can you spot the difference?
What about now?


Below is the reference image we fed to the AI.

Side by side comparison.

After running it through Trellis (one of the leading open-source image-to-3D model generators) and comparing the result against the handcrafted pickleball paddle our internal team created, I was able to group the recurring failure patterns by their actual usability.
Trellis generated the model in about 8 seconds, which felt fast. The model was only ~1 MB, which felt small! Maybe this could work! Perhaps the customer buying their next pickleball paddle won't be able to tell the difference!
But peel back the layers and it starts to get messy.
There are clear clusters composed of the exact same artifacts found in almost all current generative 3D models: wobbly silhouettes, illegible text, and impossibly bad UV maps.


Have a closer look at the wireframes

Vertically, the AI models move from "chaotic noise" to "slightly smoothed noise," but never to actual structure. Horizontally, the textures go from "hallucinated gibberish" to "baked-in lighting."
Interestingly, the AI pickleball paddle seems to avoid straight lines entirely.

This differs drastically from the handcrafted model, whose creator inherently understands that a manufactured object requires symmetry. It suggests that generative 3D is relatively unique in its concentration of unusable, uneditable "slop geometry."
Consistency also appears to be a myth in 3D generation. When looking at multiple generations from the exact same image, it's clear that the model just guesses wildly. Below is a sample of three different AI attempts side-by-side with the handcrafted handle.
Attempt 1
Attempt 2
Attempt 3
Handcrafted
This leads us to our next critical question: why?
To avoid getting extremely technical, here are the three leading reasons why the AI model is technically "lightweight" (1 MB) but practically "heavy" (unusable).
Handcrafted 3D relies on "edge flow": lines of geometry that naturally follow the contours of the object, allowing for smooth reflections and easy editing. AI models generate meshes using "isosurface extraction" or similar volume-to-mesh techniques, which results in what the industry calls triangle soup.
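To make "edge flow vs. triangle soup" concrete, here is a minimal pure-Python sketch. The two toy meshes below are illustrative, not the actual paddle assets: a hand-laid quad grid has a small, regular set of vertex valences (every interior vertex touches the same number of faces, which is what makes loop selection possible), while a fan-triangulated "soup" concentrates connectivity on a few vertices.

```python
# Minimal sketch with toy meshes: compare vertex valence regularity of a
# hand-modeled quad grid vs. an irregular fan-triangulated "soup".
from collections import Counter

def vertex_valences(faces):
    """Count how many faces touch each vertex index."""
    counts = Counter()
    for face in faces:
        for v in face:
            counts[v] += 1
    return counts

# (a) A 3x3 patch of quads with clean edge flow over a 4x4 vertex grid:
# corner, edge, and interior vertices fall into just three valence classes,
# so an artist can select a whole loop and slide it.
quad_faces = [(r * 4 + c, r * 4 + c + 1, (r + 1) * 4 + c + 1, (r + 1) * 4 + c)
              for r in range(3) for c in range(3)]

# (b) The same region fan-triangulated from one corner vertex: valences are
# wildly uneven, the hallmark of extracted "triangle soup".
soup_faces = [(0, i, i + 1) for i in range(1, 15)]

print(sorted(set(vertex_valences(quad_faces).values())))  # small, regular set
print(sorted(set(vertex_valences(soup_faces).values())))  # vertex 0 dominates
```

The point is not the specific numbers but the distribution: regular valences mean predictable edge loops; irregular valences mean there is no loop to select.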


If a client asks, "Can you make the handle slightly longer?", on the human model, I can select a loop of polygons and pull. The edit is done in 10 seconds.
On the AI model, I cannot. There are no loops. I would have to sculpt it like clay, destroying the texture in the process. It is actually faster to rebuild the entire model from scratch than to try and fix the AI's topology.
The most damning evidence is in the details. The AI sees pixels in the source image and projects them onto the 3D shape, but it possesses absolutely zero understanding of materials.


The AI (Left): The grip tape is a low-resolution smear. The branding text ("APL PRO") has been melted into an illegible blue blob. The lighting is "baked in," meaning if you rotate the paddle, the shadows don't move. It looks like a PlayStation 2 game texture.
The Human (Right): The grip tape has a PBR (Physically Based Rendering) normal map applied, meaning the bumps interact with light natively in the 3D viewer. The text is crisp geometry or a high-resolution decal.
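The difference between a real normal map and baked-in lighting comes down to when the shading is computed. A toy Lambertian sketch (illustrative vectors, not our actual shader): with real normals, brightness is recomputed as max(0, N · L) every frame, so it changes as the paddle rotates relative to the light; with baked lighting, the texture stores one fixed value and the shadows never move.

```python
# Toy Lambertian shading: brightness = max(0, N dot L).
# Vectors below are illustrative, approximately unit-length.
def lambert(normal, light):
    dot = sum(n * l for n, l in zip(normal, light))
    return max(0.0, dot)

bump_normal = (0.3, 0.0, 0.954)  # a grip-tape bump encoded in a normal map
light_a = (0.0, 0.0, 1.0)        # light straight on
light_b = (0.707, 0.0, 0.707)    # paddle rotated ~45 degrees to the light

# With real normals, shading responds to the rotation:
print(round(lambert(bump_normal, light_a), 3))  # 0.954
print(round(lambert(bump_normal, light_b), 3))  # 0.887

# With baked-in lighting, the texture returns the same painted-on value
# no matter where the light is:
baked_value = 0.8
print(baked_value, baked_value)
```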
We should also talk about the UV maps, which become nearly unusable. When you unwrap the 3D model to look at the raw texture file, you get this:

The fully generated texture from the AI is an incoherent soup of low-quality pixels that simply has to be thrown out. There are no logical seams, so it's impossible for an artist to open it in Photoshop and swap out a logo or fix a color code.
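"Logical seams" can be checked mechanically: a clean unwrap groups faces into a few connected UV islands (paddle face, grip, edge band) that an artist can grab and edit, while soup UVs fragment into many disconnected scraps. A hedged pure-Python sketch with toy data, using union-find over faces that share UV vertex indices:

```python
# Toy sketch: count UV islands by unioning faces that share a UV vertex.
def count_islands(faces_uv):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for face in faces_uv:
        for uv in face[1:]:
            union(face[0], uv)
    return len({find(uv) for face in faces_uv for uv in face})

# Clean unwrap: faces share UV vertices inside two logical islands.
clean = [(0, 1, 2), (1, 2, 3), (10, 11, 12)]
# Soup: every face lands on its own disconnected patch of texture space.
soup = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

print(count_islands(clean))  # 2
print(count_islands(soup))   # 3
```

On a real AI-generated mesh the soup count runs into the hundreds, which is why there is nothing coherent to edit.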
There is a misconception that AI 3D is "bloated." Models like Trellis do indeed output small meshes, which looks good on paper: the AI model's file was around 1 MB, while our handcrafted model is 800 KB.
Our hand-crafted mesh actually has a lot more vertices. So the AI is comparable in size, right? Somewhat... but in efficiency, not even close.
The AI's 1 MB is spent on useless, chaotic triangles that define a wobbly, asymmetric shape. The human's 800 KB is spent on smart, dense geometry placed exactly where it's needed, paired with high-quality, editable texture maps.
It is not about the file size; it is about the quality per kilobyte. The AI gives you 1MB of trash. The human gives you 800KB of a production-ready e-commerce product.
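One way to see "quality per kilobyte": divide file size by vertex count. The vertex counts below are assumptions for illustration (the article gives only the file sizes); the shape of the result is the point, not the exact figures.

```python
# Illustrative arithmetic only: vertex counts are assumed, not measured.
ai_bytes, ai_vertices = 1_000_000, 25_000        # assumed
human_bytes, human_vertices = 800_000, 40_000    # assumed

# The handcrafted file is smaller *and* has more vertices, so it spends
# fewer bytes per vertex: no space wasted on noise triangles or a baked,
# unusable texture.
print(ai_bytes / ai_vertices)        # 40.0 bytes per vertex
print(human_bytes / human_vertices)  # 20.0 bytes per vertex
```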
All of this means the human touch is still very much required. A good 3D modeler works intuitively, understanding where to place geometry for clean edge flow and how to create clean UV maps that can easily be modified.
The AI model, on the other hand, tries to keep the file size low by reducing geometry. But because it doesn't intuitively understand the shape, it removes geometry from structurally important places (like the edge of the paddle) and leaves it in completely useless ones.


The result is a silhouette that looks inherently lumpy. In e-commerce, trust is visual. If the digital product looks lumpy and cheap, the customer assumes the physical product is low quality.
The Reality of the "Time-Saving" Illusion:
If you use an AI model, you might think you are saving 4 hours of initial modeling time. However:
To fix these glaring issues, a 3D artist has to "retopologize" (manually trace over) the entire AI model. This salvage process actually takes longer than just modeling the object from scratch because the artist is constantly fighting against bad geometry.
Until AI models can natively output clean topology and separated PBR materials (roughness, metalness, normal maps) instead of just "colored geometry," they are effectively just 3D clip art.
Are they useful for background assets that will never be looked at up close? Maybe. Are they useful for selling a high-end, $200 physical product in a 3D product configurator? Absolutely not.
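For reference, the "separated PBR materials" mentioned above correspond to something like a glTF 2.0 `pbrMetallicRoughness` material block. The texture indices and factor values here are assumptions for illustration, not from any actual asset:

```python
# Sketch of separated PBR channels in the shape of a glTF 2.0 material.
# Indices and factors are illustrative.
material = {
    "pbrMetallicRoughness": {
        "baseColorTexture": {"index": 0},  # albedo only: no baked shadows
        "metallicFactor": 0.0,             # grip tape is non-metallic
        "roughnessFactor": 0.9,            # matte surface
    },
    "normalTexture": {"index": 1},         # bumps that react to light
}
print(sorted(material))
```

"Colored geometry" collapses all of these channels into a single baked texture, which is exactly why it can't respond to the viewer's lighting.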
But, as seems to be the norm with AI, this post may not age well a few years from now.
Written by Sean Chenoweth, Founder of Aircada. We build handcrafted, high-converting 3D product configurators for Shopify and e-commerce brands like American Pickleball League, Cinch Gaming, Flexifoil, the infamous Kazoos.com, and many more. No slop, just clean topology and squeaky clean textures.
Discuss your 3D project

Somehow this article explains perfectly, visually, how AI-generated code differs from human-generated code as well.
You see the exact same patterns. AI uses more code to accomplish the same thing, less efficiently.
I'm not even an AI hater. It's just a fact.
The human then has to go through and clean up that code if you want to deliver a high-quality product.
Similarly, you can slap that AI-generated 3D model right into your game engine, with its terrible topology, and have it perform "ok". As you add more of these terrible models you end up with crap performance, but who cares, you delivered the game on time, right? A human can then slave away fixing the terrible topology and textures, taking longer than they would have if the object had been modeled correctly to begin with.
The comparison of edge-loops to "high quality code" is also one that I mentally draw. High quality code can be a joy to extend and build upon.
Low quality code is like the dense mesh pictured. You have a million cross interactions and side-effects. Half the time it's easier to gut the whole thing and build a better system.
Again, I use AI models daily but AI for tools is different from AI for large products. The large products will demand the bulk of your time constantly refactoring and cleaning the code (with AI as well) -- such that you lose nearly all of the perceived speed enhancements.
That is, if you care about a high quality codebase and product...
"High-quality code can be a joy to extend and build upon." I love the analogy here. It is a perfect parallel to how a good 3D model is a delight to extend. Some of the better modelers we've worked with return a model that is so incredibly lightweight, easily modifiable, and looks like the real thing that I am amazed each time.
The good thing about 3D slop vs. code slop is that it is so much easier to spot at first glance. A sloppy model immediately looks sloppy to nearly any untrained eye. But on closer look at the mesh, UVs, and texture, a trained eye is able to spot just how sloppy it truly is. Whereas with code, the untrained eye will have no idea how bad that code truly is. And as we all know now, this is creating an insane amount of security vulnerabilities in production.
> Similarly, you can slap that AI generated 3D model right into your game engine, with its terrible topology and have it perform "ok".
You can't, because NVidia is selling all their chips to "AI" and you don't have any chips left to run the "AI" generated models on.
But what business value does high quality code bring? /s
Maintainability, which in the long run is more expensive in market opportunity costs than anybody admits.
We will get an interesting effect if AI plateaus around where it is now, which is that AI code generation will bring "the long run" right down to "the medium run," if not the longer side of the short run. AI can take out technical debt an order of magnitude faster than human developers, easily, and I'm still waiting for it to recognize that an abstraction is necessary and invest in putting one in the code rather than just leaning on the ones already present.
Of course if AI continues to proceed forward and we get to the point where the AIs can do that then they really will be able to craft vast code bases at speeds we could never keep up with on our own. However, I'm not particularly convinced LLMs are going to advance past this particular point, to a large degree because their training data contains so much of this slop approach to coding. Someone's going to have to come up with the next iteration of AI tech, I think.
I wonder about heavy curation of data sets, and then using only senior-level developers in the alignment/RLHF phases, so that the expertise of a senior-level developer becomes the training signal. The psychology of those senior-level developers would be interesting, because they would knowingly be putting huge numbers of their peers, globally, out of work. I wonder whether it would happen; then of course it will; and then I question whether we're really that desperate.
I'm not so sure about that. All major software companies have enjoyed exponentially rising profits alongside steadily declining quality.
While at the same time, other companies have built entire business lines around fixing shit code (probably with more of the same, though).
Which companies?
Until it breaks and can no longer be fixed because it is now all inscrutable spaghetti.
Debt doesn't harm you until the carrying costs become too high versus profits. You just have to hit that point (if it exists; maybe growth accelerates forever, if you're optimistic).
If you only knew how the enterprise space does stuff you'd realize how little a priority maintainability is.
I'm grateful we had Java when this stuff was taking off; if any enterprise applications were written in anything else available at the time (like C/C++) we'd all suffer even more memory leaks, security vulnerabilities, and data breaches than we do now.
Now that's interesting, because I come from a world where enterprise-level stuff was all done in C/C++ until quite recently, and with the shift to "web technologies" the quality of virtually everything has dropped through the floor, including the knowledge and skill level of the developers working on the tech. It is rare that I see people who have been working in excess of 10 years post-graduation, if they went to college at all. The college grads have been pushed out by lower-quality, lower-skilled React developers who really do not belong in the industry at all. It's really a crime how low things have gotten in such a short time: 10 to 15 years ago there were people with 2-3 decades of experience all over the place. Not anymore.
Trellis is like a year old and practically free. There are already better models to make comparisons to.
Because they all use latent diffusion, and many techniques rely on voxelized intermediate representations of 3D models (often generated from images), topology is bound to be bad.
There is a lot of ongoing research around getting better topology. I expect these critiques to still be valid as much as 2 years from now, but the economics of modeling will change drastically as the models get better
Some of the defects are attributable to this critical point:
> AI models generate meshes using "isosurface extraction" or similar volume-to-mesh techniques
This creates the "lumpiness", the inability to capture sharp or flat features, and the over-refinement. Noisy surface is also harder to clean up. How do you define what's a feature and what's noise when there's no ground truth beyond the mesh itself?
Implicit surface methods are expensive (versus the if-everything-goes-right parametric alternative), but they have the advantage of being robust and simple to implement, with far fewer moving parts. So it's a pragmatic choice, why not.
3D generative algorithms might become much better once they can rely on parametric surfaces. Then you can do things like symmetry, flatness, curvature that makes sense, much more naturally. And the mesh generation on top will produce very clean meshes, if it succeeds. That is a crucial missing piece: CAD to mesh is hardly robust with human-generated CAD, so I can't imagine what it'd be with AI-generated CAD. An interesting challenge to be sure.