Blender (1994) is the go-to free (yes, free) open-source software for designing 3D objects. If you’re willing to learn the user interface (UI) and design terminology, you can create pretty much anything you can think of.
But most people's eyes glaze over at words like "ambient occlusion" or "subsurface scattering", so reaching the masses who will supposedly be spending hours in the Metaverse a few years from now will take a far more intuitive creation tool.
The growing consensus among Metaverse developers is that to truly become a Metaverse, you need something similar to the internet, but in 3D: a decentralized, interoperable system where you can take your avatar and assets from one virtual world to the next.
Also, like the internet, everyday users should be able to generate their own worlds, tools, and toys in the Metaverse. In-world building should also be collaborative in real time, along the lines of Nvidia's Omniverse Cloud. Today's 3D modelers design objects on a flat screen at a desk or on a laptop; the future of 3D design puts creators inside the scene, able to walk around their models as they construct them.
1. Voice and Text Tools
Many people are familiar at this point with AI-generated 2D art. The technology is accelerating quickly, and Google is now developing a text-to-3D-object tool called DreamFusion. Meta is working on something similar with an in-world builder tool (Builder Bot) that lets you talk your world into existence. Voice should reach into the modeling shortcuts too. Take the array modifier (a series of copies of the same object): instead of awkwardly adding the modifier the conventional way, an artist should be able to simply say "make 5 more of this object along this line" and have the copies appear.
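For a sense of what such a voice command might translate to behind the scenes, here is a minimal sketch using Blender's Python API (bpy). The command parsing is assumed to have already happened; the array modifier calls are standard Blender API, but the function name and spacing value are made up for illustration.

```python
# Minimal sketch: what "make 5 more of this object along this line"
# might map to in Blender's Python API. The parsing step is hypothetical;
# the Array modifier properties are real Blender API.
import bpy

def add_array_along_axis(obj, copies, axis=0, spacing=1.2):
    """Add an Array modifier so `obj` is repeated `copies` extra times along one axis."""
    mod = obj.modifiers.new(name="VoiceArray", type='ARRAY')
    mod.count = copies + 1                        # count includes the original object
    mod.use_relative_offset = True
    mod.relative_offset_displace[axis] = spacing  # offset as a fraction of the object's size

# e.g. the parsed intent of "make 5 more of this object along this line"
add_array_along_axis(bpy.context.active_object, copies=5, axis=0)
```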
2. Shape Recognition
The tools will need to understand context as well. A world might already be in the works, and if you're not starting from scratch, you might only want a few edits here and there. The building tools will have to understand what a model actually is. If you ask for something like "a fox wearing a kimono", the 3D generator should produce exactly that, but with the fox and the kimono as two separate objects rather than fused into a single mesh. You should also be able to say "I want the arms a bit longer" or "I want the fox's fur to be a bit shorter" and have the creation tool automatically recognize which parts to edit.
Shape recognition will also be critical for automatic character rigging. What is rigging? It's adding a skeletal system to a 3D character so you can animate the character through that skeleton. Pixar characters, for example, have a lot of moving parts, particularly in the face. Rigging is normally a tedious process for 3D modelers, and an AI that does it automatically would be a huge time saver. Imagine sculpting a 3D person with your hands and, the moment you're done, it's already rigged and ready to walk and talk.
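For context, this is roughly the manual binding step that automatic rigging would take off a modeler's plate, sketched with Blender's Python API. The "automatic weights" parenting call is real Blender; the object names are assumptions.

```python
# Sketch of the manual rig-binding step automatic rigging would replace:
# attaching a mesh to a skeleton using Blender's "automatic weights".
# The object names "Character" and "Armature" are placeholders.
import bpy

mesh = bpy.data.objects["Character"]     # the sculpted character mesh
armature = bpy.data.objects["Armature"]  # the skeleton that will animate it

bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature  # the armature must be the active object

# Blender estimates how strongly each bone should pull on each vertex.
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```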
3. Perfect UV Mapping and Topology Optimization
Um, what now? When you design something in 3D, the object has a bunch of little points (vertices) that you can push and pull to change the shape of the overall object. The more vertices you have, the more detailed your shape can be. These points are connected by lines (edges).

The highlighted yellow square is called a "face": a flat surface enclosed by edges. UV mapping is the technique of unwrapping the 3D object's faces onto a flat 2D layout so a texture can be applied to them. So you're basically coloring it in. Effective UV mapping makes the texture look clean and smooth on the object.

In the object above (an adorable cow), the right side shows the UV mapping of the model (yes, it looks like the poor cow was flayed; this is the sort of thing 3D modelers look at a lot). An AI should be able to do the same unwrapping effectively, and spare designers from having to look at it.
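For the curious, here is a minimal sketch, again with Blender's Python API, that inspects the vertex, edge, and face counts described above and then runs Blender's built-in automatic unwrap (Smart UV Project). This is the sort of routine step an AI-assisted tool would hide entirely; the operator calls are real, and default settings are assumed.

```python
# Minimal sketch: inspect a mesh's building blocks, then auto-unwrap it
# with Blender's built-in Smart UV Project (default settings assumed).
import bpy

obj = bpy.context.active_object
mesh = obj.data
print(f"{len(mesh.vertices)} vertices, {len(mesh.edges)} edges, {len(mesh.polygons)} faces")

bpy.ops.object.mode_set(mode='EDIT')      # unwrapping happens in Edit Mode
bpy.ops.mesh.select_all(action='SELECT')  # unwrap every face
bpy.ops.uv.smart_project()                # Blender's automatic UV unwrap
bpy.ops.object.mode_set(mode='OBJECT')
```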
The second thing, topology optimization, is cleaning up (deleting) unnecessary vertices on the object. Why? To save data. Video game designers do this a lot with their models, and have become experts at using only as many vertices as are needed to create a recognizable object or character.

On the left is a bad example of topology: lots of unnecessary vertices creating a mess. On the right is a clean, predictable layout of vertices that produces a smoother shape and takes up less data. Effective automatic retopology tools could convert an object like the one on the left into the one on the right instantly; perhaps an AI trained on thousands of video game models could make this work. That way, users could simply mold an object with their hands as they would a physical one, and the creation software would do the retopology work.
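Full retopology is still a hard research problem, but a cruder cousin of it, automatic decimation, already exists in Blender and gives a feel for the idea: cut the vertex count while keeping the shape recognizable. Here is a minimal sketch with Blender's Python API; the Decimate modifier and apply call are real, while the function name and ratio are illustrative.

```python
# Crude stand-in for automatic retopology: Blender's Decimate modifier,
# which collapses a mesh down to a fraction of its face count.
# Not true retopology, but it shows the "fewer vertices, same shape" idea.
import bpy

def reduce_mesh(obj, ratio=0.2):
    """Keep roughly `ratio` of the mesh's faces (0.2 = keep about 20%)."""
    mod = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
    mod.ratio = ratio
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)  # bake the result into the mesh

reduce_mesh(bpy.context.active_object, ratio=0.2)
```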
4. Automatic File Conversion
Digital pictures come in many file formats, such as .jpg and .png. 3D objects also have many different file extensions: .obj, .stl, .dae, and more. In an interoperable Metaverse, a 3D object should export automatically into whatever file type the destination platform needs. It would be exhausting to create a piece of clothing for your avatar on one platform, only to be met with "this platform does not recognize your object" on the next.
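As a rough illustration of the conversion that would need to happen invisibly, here is a minimal sketch using the open-source Python library trimesh, which infers formats from file extensions. The file names are placeholders, and real platforms would also have to translate materials, rigs, and animations, which this does not touch.

```python
# Minimal sketch of automatic 3D format conversion with the open-source
# trimesh library (pip install trimesh). File names are placeholders;
# materials, rigs, and animations are not handled here.
import trimesh

def convert(source_path, target_path):
    """Load a 3D model and re-export it; formats are inferred from the extensions."""
    mesh = trimesh.load(source_path, force='mesh')  # collapse the file into a single mesh
    mesh.export(target_path)

# e.g. an avatar jacket modeled as .obj, re-exported for a platform that wants .stl
convert("jacket.obj", "jacket.stl")
```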
5. Ownership Options
Right now, if you create something on Meta's platform, for example, that object belongs to Meta. Creation tools should be truly decentralized and open source to allow for true ownership of the tools and of what's made with them. NFTs (non-fungible tokens), at their core, are immutable, decentralized ownership records. The platform Ready Player Me lets you create an avatar that can travel across multiple platforms; the avatar is an NFT.
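Because NFTs are just on-chain ownership records, any platform can look one up. Here is a minimal sketch using the web3.py library to call the standard ERC-721 ownerOf function; the RPC endpoint, contract address, and token ID are all placeholders, not a real avatar contract.

```python
# Minimal sketch: reading an NFT ownership record via the standard
# ERC-721 ownerOf call, using web3.py. The RPC endpoint, contract
# address, and token ID below are placeholders.
from web3 import Web3

ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-rpc-node"))    # placeholder node URL
avatar_contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",   # placeholder contract
    abi=ERC721_OWNER_OF_ABI,
)

owner = avatar_contract.functions.ownerOf(1234).call()      # placeholder token ID
print(f"Avatar #1234 belongs to {owner}")
```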