Design

Blind man develops new AI-based design workflow for vision-impaired 3D printing

A blind Reddit user with the handle Mrblindguardian has developed a novel artificial intelligence (AI)-based workflow that allows him to design and 3D print custom models from scratch.   

Posting to Reddit on 27 January, Mrblindguardian explained how he has leveraged ChatGPT and Luma AI to design 3D models using text inputs and AI-powered image descriptions. 

Following a trial and error process, which included tweaking his design in Luma based on ChatGPT descriptions, Mrblindguardian successfully 3D printed a unique one-winged dragon model on his Bambu Lab X1 3D printer.         

Mrblindguardian’s novel 3D design workflow has generated a lot of positive feedback from the maker community, already receiving 167 upvotes over two Reddit posts. The story has also been covered in a hackster.io article.   

“Can 3D printing and design be done completely on one’s own as a completely blind person? Yes, it can,” stated Mrblindguardian in the Reddit post.

“This is my first completely own, independently designed 3D model of a dragon. Like me, it has a disability, it’s missing a wing. Created with many hours of work, my own imagination, ChatGPT and Luma AI,” added the self-described “blind man in 3D land.”  

The 3D printed dragon model. Photo via Mrblindguardian.

AI-based 3D modeling for the blind 

Mrblindguardian’s design process began with drafting a text description of what he thought a dragon looked like, cross-referencing this with descriptions found on Google. He then used Luma AI, an AI-based generative design tool, to create a draft 3D model from his text inputs.

To check that the Luma model was accurate, Mrblindguardian took screenshots of the model and uploaded them to ChatGPT to generate descriptions. Based on this feedback, he tweaked the design and repeated the process until the ChatGPT description matched his target design.
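Mrblindguardian carried out this screenshot-and-describe loop by hand, but the same step could in principle be scripted. The sketch below is a minimal, hypothetical example, assuming the OpenAI Python SDK and a vision-capable model such as gpt-4o; the file name, prompt wording, and model choice are illustrative rather than taken from his workflow.

```python
# Minimal sketch: ask a vision-capable ChatGPT model to describe a screenshot
# of a 3D model so the description can be read aloud by a screen reader.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name and file path are illustrative.
import base64
from openai import OpenAI

client = OpenAI()

def describe_screenshot(path: str) -> str:
    # Encode the screenshot so it can be sent inline as a data URL.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this 3D model screenshot in detail for a blind user: "
                         "overall shape, proportions, and any missing or unusual features."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(describe_screenshot("luma_dragon_screenshot.png"))
```

The returned description could then be compared against the original text prompt to decide whether another design iteration is needed.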

An STL file of the 3D model was then imported into an accessible slicer compatible with screen readers, such as AstroPrint or Kiri:Moto. During the slicing process, Mrblindguardian uploaded screenshots to ChatGPT, which described what was shown on screen. Each time the model was rotated or scaled, a new screenshot was uploaded, enabling Mrblindguardian to accurately manipulate his model in the slicer.

Once happy with the result, Mrblindguardian exported a new STL file and sent it to a friend, who visually inspected it and verified that the file was ready for 3D printing.
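A complementary automated check, not part of the workflow described in the Reddit post, could catch obvious print-readiness problems before the file is sent for visual inspection. The sketch below assumes the trimesh Python library; the file name is illustrative.

```python
# Minimal sketch: automated sanity checks on an exported STL before printing.
# Assumes the trimesh library (`pip install trimesh`); the file name is illustrative.
import trimesh

mesh = trimesh.load("dragon.stl")

# A watertight (closed) mesh is generally required for reliable slicing.
print("Watertight:", mesh.is_watertight)

# Bounding-box dimensions in the file's units (usually millimetres),
# useful for confirming the model fits the printer's build volume.
print("Dimensions (x, y, z):", mesh.extents)

# Inconsistent face winding can cause slicers to confuse inside and outside.
print("Winding consistent:", mesh.is_winding_consistent)
```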

The completed file was then imported into the 3D printer’s slicer, where Mrblindguardian chose a color, sliced the model, and 3D printed the dragon.

Mrblindguardian acknowledges that this is a lengthy process, with the dragon model taking several hours to slice and 3D print. However, the process certainly works and has enabled Mrblindguardian, who lost his sight at the age of two, to 3D print a unique model from scratch.    

According to Mrblindguardian, this process would be easier if the Bambu Lab slicer was made more accessible. 

“If Bambu Lab would make their slicer accessible, or at least, somewhat more, that would be a great help,” Mrblindguardian commented on Reddit. “Right now, I am using so many different softwares, because they all have different accessibility issues.” 

Top view of the 3D printed dragon model. Photo via Mrblindguardian.

AI-powered design for 3D printing

Developments in AI are enabling novel processes for 3D design, making 3D printing more accessible to more people. 

London-based 3D printing software developer Ai Build’s AI-powered Talk to AiSync software allows users to control slicing and toolpath generation with simple text prompts, such as “slice the part with 2mm layer height.”

Although targeted towards industrial applications, Ai Build’s software has been designed to increase the accessibility of large-format 3D printing by reducing the skill gap. In an interview with 3D Printing Industry, Ai Build CEO Daghan Cam said that the company’s goal is to make 3D printing “super accessible to inexperienced users by making the user experience really smooth.”

“We are trying to make it super easy for anyone. Even a designer that doesn’t have much 3D printing background should be able to send their designs into machines,” added Cam.

Similarly, GPU manufacturer Nvidia has developed Magic3D, a generative AI tool that produces 3D models from text prompts. The resulting 3D mesh models can include colored textures and can be generated within 40 minutes.

The resulting 3D models are designed to be used in CGI art scenes or video games, and cannot currently be 3D printed. However, potential applications within 3D printing, such as exporting the AI-generated mesh as a 3D printable file, are clear to see. 

Subscribe to the 3D Printing Industry newsletter to keep up to date with the latest 3D printing news. You can also follow us on Twitter, like our Facebook page, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.

Are you interested in working in the additive manufacturing industry? Visit 3D Printing Jobs to view a selection of available roles and kickstart your career.

Featured image shows the 3D printed dragon model. Photo via Mrblindguardian.
