Someday someone will come up with an equation that precisely defines the tipping point between our natural laziness and willingness to give up privacy to alleviate some of it. I don’t think today is that day.
Years ago I wrote about a robot that you could teleoperate from anywhere in the world via WiFi. We tried it in our offices and I even drove it around my home where it scared my family. Back then, we didn’t think too much about the privacy implications because it was me, not a third party, navigating the robot and seeing what it could see.
TechRadar AI Week 2025
This article is part of TechRadar’s AI Week 2025. We cover the basics of AI and show you how to get the most out of the likes of ChatGPT, Gemini or Claude, along with in-depth features, news and the main talking points in the world of AI.
With ads for 1X’s still-unreleased robot appearing on subway ad screens, and with testing so far limited to a handful of reporters’ homes, consumers are being asked to consider their willingness to invite the 5’6″, 66 lb humanoid robot into their homes. While the $20,000 robot (or $499 per month on subscription) is pitched as autonomous, it will inevitably encounter unknown scenarios in your home. In these cases, 1X technicians may, with your permission, take over, remotely operate it, and apparently train the robot’s Redwood AI in the process.
Even in casual conversations, this news gives people pause, so we decided to survey our nearly half a million WhatsApp followers with this question:
“1X Neo is a new $20,000 home robot that can be controlled remotely by a human. But how do you feel about an AI robot learning skills based on your home data?”
While the majority (409 people) said they were undecided but still thought a “household robot would be great,” a significant number (340 respondents) were decidedly less sanguine, choosing “Sounds terrible and a total breach of privacy.”
73 described the Neo as what they’ve “always dreamed of,” and only 48 were happy to let 1X and Neo do their training at home.
I understand the concern and to be honest it is far from new. Back in 2019, when Sony unveiled its latest update to its AIBO robot dog, some expressed concerns about a mobile robot with a camera in its snout, the embedded facial recognition AI (useful for AIBO to remember family faces), and Sony’s access to all collected data.
At the time, Sony stored data locally and in its cloud, but hashed it in a way that could not be identified as personal information. Still, the robot could not be sold in Illinois because its capabilities fell outside the state’s Biometric Information Privacy Act.
With 1X’s far more powerful AI and models, one can assume that privacy concerns should triple.
I asked technology and regulatory attorney Kathleen McGee via email how concerned consumers should be.
McGee, who previously served at the New York Attorney General’s Office, most recently as Bureau Chief of Internet & Technology, and is now a partner in Lowenstein Sandler’s Data Privacy, Security, Safety & Risk Management Practice, told me that the data companies like 1X collect “range from the mundane (where you put your dish detergent) to the very personal (video footage and real-time images of your home and residents, including children). Any data collection that is so sensitive and ongoing requires high-level safeguards to ensure that the data is anonymized, stored only as needed, and that the AI model(s) are trained in accordance with both ethical and legal standards.”
Clarity, notes McGee, is key. “Intended users of these products should be very clear about how the data is being used and shared and what rights users have to delete data – once an AI model is created and trained on your sensitive data, it’s virtually impossible to relax completely.”
However, 1X makes clear in its FAQ that while data collected from “real-world tasks” is used to build NEO’s base intelligence and increase both its capabilities and security, “we do not use this data to build a profile of you, nor do we sell this data. If you do not want to participate in helping to further improve NEO, you can always opt out.”
Data aside, a robo-roving camera attached to fully articulated limbs and hands raises the specter of a remote search of your home. Reddit is well stocked with these concerns.
In a scathing post about privacy in the Neo robot, Reddit user GrandyRetroCandy wrote:
“If law enforcement goes to the 1X office. Says ‘we have a warrant’. They can order an operator to take control of the Neo Robot, and while you’re out shopping or away from home, they can have this robot look through your wallet. Your diary. Your house. Your drawers. And see everything about you.”
It sounds scary, but GrandyRetroCandy quickly clarified:
“Technically, that part isn’t legal. It’s technically possible (it could be done), but it’s not legal. But if they have a warrant, they can see all the camera footage saved from your Neo Robot. That part is legal.”
McGee also told me, “Another concern in general with these types of domestic products is the potential exposure of data that a user may believe is private to them but may be subject to a subpoena, search warrant, or intrusion by the threat actor. The privacy concerns for users cannot be separated from the security issues.”
AI needs your data…and you need your privacy
Basically, the idea of someone suddenly using the 1X Neo to roam your home and go through your stuff is way beyond the realm of probability, if not possibility.
The truth is that humanoid robots will never be practical and useful without healthy amounts of data input from all users and homes, especially in the early days when they are bound to make mistakes.
For robotics and automation, one of the major advances in recent years has been simulated training. It has helped autonomous driving and many of these early humanoid robots. And yes, we can see how difficult it is to get humanoid robots prepared for the unexpected.
At this point, the 1X Neo Beta is so unprepared that most of its abilities are teleoperated. Getting humanoid robots ready for the spotlight remains hard work. In Russia, the AIdol robot was so unprepared for the bright lights of fame that it spectacularly face-planted.
Freely providing data that cannot be used to invade our privacy will help these robots learn and improve quickly, but there must be limits and controls.
A large part of the responsibility lies with companies like 1X, especially those that develop AI. As McGee pointed out in an email, “Many jurisdictions have privacy laws, and for AI developers the focus should always be on compliance with the most stringent of these rules. Again, both ethics and law have a place here, and we advise our clients to build in a strong foundation of trust and transparency to ensure stability and durability of their AI.”
As of April this year, only 20 US states had comprehensive data privacy laws. At least in the EU, there’s GDPR (the General Data Protection Regulation), which is so strict that some AI technologies have been withheld from the 27 countries that make up the bloc. The UK has its own, almost identical UK GDPR.
There’s probably a happy medium between what we have here in the US and GDPR, but the intention should be the same: safely training a humanoid robot army that knows how to help us and even do our household chores for us without triggering massive privacy alarms.
