Carnegie Mellon University has presented LLM-Drone, a system that pairs large language models (LLMs) with drones to extend additive manufacturing into settings where conventional 3D printers cannot operate. Published by Springer Nature, the study shows how drones equipped with magnetically interlocking blocks can assemble structures described through text prompts, achieving 90 percent build accuracy in laboratory tests. The approach demonstrates that language-driven planning can overcome the precision limits of aerial robots by dynamically revising construction plans during execution.
Additive manufacturing enables precise, layer-by-layer fabrication but typically requires fixed build platforms and controlled environments. Drones offer mobility to elevated or remote sites, yet extrusion-based methods suffer from vibration and drift during flight. LLM-Drone avoids deposition issues by using lightweight blocks designed with magnetic interlocks and a raised alignment hump that compensates for placement inaccuracies. Drones pick up and drop these blocks, while an LLM translates user instructions into structured coordinates and adapts designs when misplacements occur.

Three modules structure the pipeline. A planning module uses an LLM to generate JSON-formatted coordinates from user prompts. A computer vision module aligns these coordinates with the real-world frame using AprilTags and Bitcraze’s Lighthouse positioning system. A mechanical module, built on the Crazyflie 2.1 nanoquadcopter, executes block transport and placement. Bitcraze developed the Crazyflie as a research platform with integrated motion tracking and a Python API, making it well suited to academic testing. Carnegie Mellon extended this ecosystem with a webcam, 3D-printed blocks, and magnetic fixtures.
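To illustrate the planning module's output, the sketch below parses an LLM response into grid coordinates and validates them against the build area. The JSON schema (`{"blocks": [{"x": ..., "y": ...}]}`) is a hypothetical stand-in; the paper does not publish its exact format.

```python
import json

GRID_SIZE = 5  # the physical experiments use a five-by-five grid


def parse_plan(llm_output: str) -> list[tuple[int, int]]:
    """Parse an LLM-generated JSON plan into ordered grid coordinates.

    Schema assumed here is illustrative, not the paper's actual format.
    Raises ValueError if any block falls outside the build grid, which
    lets the system reprompt the LLM rather than attempt an invalid drop.
    """
    plan = json.loads(llm_output)
    coords = []
    for block in plan["blocks"]:
        x, y = int(block["x"]), int(block["y"])
        if not (0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE):
            raise ValueError(f"block ({x}, {y}) falls outside the build grid")
        coords.append((x, y))
    return coords


# Example: a cross shape, one of the shapes built in the experiments
raw = ('{"blocks": [{"x": 2, "y": 0}, {"x": 2, "y": 1}, {"x": 2, "y": 2}, '
       '{"x": 1, "y": 2}, {"x": 3, "y": 2}]}')
print(parse_plan(raw))  # [(2, 0), (2, 1), (2, 2), (1, 2), (3, 2)]
```

Validating coordinates before flight matters here because, as the qualitative trials showed, some models struggled to produce feasible, well-formatted plans.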
Evaluation compared Claude 3.5 Sonnet, GPT-4o, and Gemini Pro 1.5 across constrained and open-ended tasks. In quantitative tests using 15 constrained prompts, Claude achieved an average Intersection over Union (IoU) of 89.5 percent with a variance of 0.008, GPT-4o scored 80.4 percent with 0.027 variance, and Gemini Pro reached 67.2 percent with 0.031 variance. Inference times also varied: Claude processed in 680 milliseconds, GPT-4o in 920 ms, and Gemini Pro in 1,150 ms. Costs per 1,000 tokens differed, with Claude slightly higher but offset by its accuracy and consistency. In qualitative trials, evaluators graded outputs on a three-point scale: 1 indicated that a shape such as a star or trapezoid was both feasible and recognizable, 2 that only one criterion was met, and 3 that neither was. Claude and GPT-4o consistently generated recognizable structures, while Gemini Pro struggled with format and feasibility.

Physical experiments used a five-by-five grid to construct shapes including a smiley face, diamond, square, and cross. Drift from the Lighthouse system, turbulence from ground effect, and incorrect magnet attachments caused misplacements. Vision-based corrections relied on YOLO-v8 detection of colored blocks, supported by Lucas–Kanade feature tracking and background subtraction to verify successful placements. When errors occurred, the LLM replanned: a misaligned cross was rotated to fit available blocks, a misplaced square was adjusted by resequencing, and a diamond incorporated blocks already dropped in error. Comparative runs with and without reprompting confirmed that feedback loops improved overall build outcomes.
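The replanning step can be understood as diffing the vision module's detected layout against the target and feeding the mismatch back to the LLM. The sketch below computes only that diff; the prompt construction and LLM call it would feed are omitted, and the block positions are illustrative, not taken from the paper.

```python
def replan_diff(target: set[tuple[int, int]],
                detected: set[tuple[int, int]]) -> tuple[set, set]:
    """Compare the intended layout with vision-detected block positions.

    Returns (still_needed, stray): cells the drone must still fill, and
    blocks dropped outside the target shape. In LLM-Drone, this mismatch
    is what a reprompt would describe so the LLM can revise the plan,
    e.g. by incorporating stray blocks into an adjusted design.
    """
    still_needed = target - detected
    stray = detected - target
    return still_needed, stray


# Hypothetical partial diamond build with one block dropped in error
target = {(2, 0), (1, 1), (3, 1), (2, 2)}
detected = {(2, 0), (1, 1), (3, 2)}
needed, stray = replan_diff(target, detected)
print(sorted(needed))  # [(2, 2), (3, 1)]
print(sorted(stray))   # [(3, 2)]
```

This mirrors the diamond example above: rather than retrieving the stray block, the planner can treat it as part of a revised shape and only schedule the remaining placements.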
Drone-based additive manufacturing research began with ETH Zurich’s cooperative quadrotor assembly experiments, which demonstrated assembly of predefined structures but required rigid localization. Later work employed multiple drones extruding material with feedback loops, but vibration-induced imprecision limited scalability. By shifting to block-based assembly, Carnegie Mellon sidesteps deposition challenges and integrates error correction directly into the planning layer.

Integration of language models into robotics has advanced since Google’s SayCan, which demonstrated LLM-based real-time planning for household robots. Huang and collaborators showed that semantic planners could revise multi-step instructions when encountering disturbances, while Vemprala extended similar methods to mobile robotics. Liang’s “Code as Policies” framework demonstrated that LLMs could interpret commands and generate executable code adaptable to environmental shifts. Within additive manufacturing, LLMs have also been applied to optimize printing parameters. LLM-Drone extends these principles to aerial systems, where instability is a persistent barrier.
Carnegie Mellon notes limitations of the current setup. Ground-effect turbulence near surfaces destabilized drones, Lighthouse drift degraded positioning accuracy, and inconsistent magnet strength occasionally prevented clean detachment. YOLO-based detection also produced inconsistencies that required additional image subtraction to confirm block placement. These challenges underline the controlled nature of the experiments and the gap between laboratory results and real-world deployment.
Future development will focus on scaling to larger drones with greater payload capacity, integrating electromagnets that can be switched on and off for precision control, and extending builds beyond single layers into fully three-dimensional structures. Researchers suggest that incorporating these advances would enable more robust on-site additive manufacturing in unstructured or hazardous environments.

The LLM-Drone code base has been made publicly accessible at https://sites.google.com/andrew.cmu.edu/llm-drone.
Featured image shows model of Crazyflie pickup apparatus. Image via Carnegie Mellon University.

