By Dave DeFusco
Chengyi Liu, a student in the M.S. in Artificial Intelligence program, is helping teach machines to recognize human activities in a way that’s smarter, safer and more private. At the Katz School’s Graduate Symposium on Science, Technology and Health, Liu presented his research on improving human activity recognition using millimeter wave (mmWave) radar and large language models (LLMs), the AI engines behind tools like ChatGPT.
“Traditionally, activity recognition has relied on cameras or wearable sensors,” said Liu. “Cameras raise privacy concerns and wearables aren’t always practical, but mmWave offers a way to recognize motion without seeing or touching the person.”
Think of mmWave as a kind of radar that bounces signals off a person’s body to sense how they’re moving. This technology can enable applications such as fall detection for seniors, fitness coaching and motion tracking in virtual reality, all without needing a camera in the room. But there’s a catch.
“To make mmWave human activity recognition work well, you need a lot of labeled training data collected from real people in many different settings,” said Dr. Yucheng Xie, Liu’s advisor and an assistant professor in the Department of Graduate Computer Science and Engineering. “That’s extremely time-consuming and expensive.”
Liu’s solution was to use AI to fake it, realistically. He and Dr. Xie created a framework that combines large language models with 3D motion simulation. The idea is to use LLMs to write descriptions of human activities, such as “a firefighter rapidly running forward” or “a child slowly turning in a circle,” and then turn those words into biomechanically realistic digital motion.
“In our system, the LLM automatically generates 50 different versions of each activity, like walking or running, and varies the speed, direction or body orientation,” said Liu. “Then we use a motion synthesis model to create 3D skeleton movements from those descriptions.”
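The article doesn’t spell out the prompting setup, but the generation step might look something like the sketch below, which assumes an OpenAI-style chat client and a placeholder model name; the actual model, prompt and framework in Liu’s system are not specified.

```python
# Hypothetical sketch of the description-generation step. Assumes the
# OpenAI Python client; the model name and prompt are placeholders,
# not Liu's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_activity(activity: str, n_variants: int = 50) -> list[str]:
    """Ask the LLM for n_variants one-sentence descriptions of one
    activity, varying speed, direction and body orientation."""
    prompt = (
        f"Write {n_variants} one-sentence descriptions of a person "
        f"performing the activity '{activity}'. Vary the speed, movement "
        "direction, and body orientation in each sentence. "
        "Return one description per line."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]


walking_variants = describe_activity("walking")
```

Putting the variation in text first is what makes the pipeline steerable: changing a prompt changes the entire downstream motion library.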
To make sure the synthetic motions look real and follow human anatomy, Liu applies what’s called an inverse kinematics filter. This step weeds out physically impossible movements, like bending a knee the wrong way or twisting an arm unnaturally. The result is a library of realistic, diverse human movements generated from simple text. And because each movement is described in words first, the system can adapt to new scenarios just by updating the descriptions.
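To illustrate the filtering idea, here is a minimal, hypothetical plausibility check over joint angles; the joint names and range-of-motion limits are rough assumptions, not the team’s actual inverse kinematics constraints.

```python
# Illustrative sketch, not Liu's exact pipeline: reject synthetic
# motions whose joint angles fall outside rough human range-of-motion
# limits.
import numpy as np

# Hypothetical per-joint flexion limits in degrees: joint -> (min, max).
JOINT_LIMITS = {
    "knee": (0.0, 150.0),   # a knee should not hyperextend backward
    "elbow": (0.0, 160.0),
}


def is_plausible(motion: dict) -> bool:
    """motion maps joint names to sequences of angles, one per frame."""
    for joint, angles in motion.items():
        lo, hi = JOINT_LIMITS[joint]
        angles = np.asarray(angles)
        if np.any(angles < lo) or np.any(angles > hi):
            return False  # e.g., a knee bent the wrong way
    return True


candidate_motions = [
    {"knee": [10.0, 45.0, 90.0], "elbow": [20.0, 30.0, 25.0]},  # valid
    {"knee": [-25.0, 5.0, 10.0], "elbow": [15.0, 15.0, 15.0]},  # knee at -25°
]
library = [m for m in candidate_motions if is_plausible(m)]  # keeps only the first
```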
Once the movements are generated, Liu simulates how mmWave radar would see them. Using a digital human body model called SMPL (Skinned Multi-Person Linear model) and a technique known as ray tracing, the team builds a 3D representation of how radio waves would bounce around a room and off the person’s body.
“This is where the environment comes in,” said Liu. “Walls, furniture and body shape all affect how mmWave signals behave. Our system takes those factors into account to generate more accurate radar data.”
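As a toy stand-in for that simulation, not the team’s actual SMPL-plus-ray-tracing pipeline, the sketch below treats the body surface as a cloud of 3D points, computes the range and radial (Doppler) velocity each point presents to a radar at the origin, and folds in a wall reflection with the classic image method.

```python
# Toy stand-in for the radar simulation: per-point range and Doppler
# for a body point cloud seen by a radar at the origin.
import numpy as np


def radar_view(points_t0: np.ndarray, points_t1: np.ndarray, dt: float):
    """points_*: (N, 3) body surface points at two consecutive frames."""
    ranges = np.linalg.norm(points_t1, axis=1)         # distance to the radar
    velocity = (points_t1 - points_t0) / dt            # per-point motion
    radial_dir = points_t1 / ranges[:, None]           # unit line-of-sight
    doppler = np.sum(velocity * radial_dir, axis=1)    # radial speed per point
    return ranges, doppler


# Two frames of a random "body" about 3 m away, stepping 5 cm toward
# the radar in one 30 fps frame interval.
frame0 = np.random.rand(100, 3) + np.array([0.0, 3.0, 1.0])
frame1 = frame0 - np.array([0.0, 0.05, 0.0])
ranges, doppler = radar_view(frame0, frame1, dt=1 / 30)

# A wall can be folded in with the image method: reflect the points
# across the wall plane (here x = -2 m) and treat the mirrored copy
# as a second, weaker echo source.
wall_x = -2.0


def mirror(pts: np.ndarray) -> np.ndarray:
    """Reflect points across the wall plane x = wall_x."""
    out = pts.copy()
    out[:, 0] = 2 * wall_x - out[:, 0]
    return out


wall_ranges, wall_doppler = radar_view(mirror(frame0), mirror(frame1), dt=1 / 30)
```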
By combining personalized 3D human meshes with environment-aware simulation, Liu can create synthetic mmWave data for a huge range of realistic scenarios without ever needing to record a person in a lab. With the synthetic data in hand, the final step is to teach the system to recognize what activity is being performed. Here, Liu again turns to LLMs, this time not to write motion descriptions but to interpret them.
“The language model helps match the mmWave signals to the activity descriptions it helped create,” he said. “That way, the recognition system becomes more adaptable to different settings and individuals.”
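The article doesn’t name the matching mechanism, but a common pattern for this kind of task is to embed the radar signal and the candidate activity descriptions in a shared space and pick the nearest description. The toy 4-dimensional embeddings below stand in for real encoder outputs.

```python
# Hedged sketch of the matching step: embed signal and descriptions,
# then pick the description whose embedding is most similar. The
# vectors here are toy stand-ins for real encoder outputs.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def recognize(signal_emb, description_embs, labels):
    scores = [cosine(signal_emb, d) for d in description_embs]
    return labels[int(np.argmax(scores))]


signal = np.array([0.9, 0.1, 0.0, 0.2])
descriptions = [np.array([0.8, 0.2, 0.1, 0.1]),   # "a person walking"
                np.array([0.1, 0.9, 0.3, 0.0])]   # "a person waving"
print(recognize(signal, descriptions, ["walking", "waving"]))  # -> walking
```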
This feedback loop, from activity name to description to simulation and back to recognition, makes Liu’s system flexible, efficient and capable of learning from synthetic data. Although the research is still in its early stages, the implications are big. By reducing the need for costly real-world data collection, Liu’s framework could speed up the development of privacy-safe motion recognition tools for healthcare, sports, virtual reality and more.
“This work shows how combining language models with physical simulation can unlock powerful new capabilities,” said Dr. Xie. “It’s a great example of what interdisciplinary AI research can do.”