It will soon be easy for self-driving cars to hide in plain sight. We should not let them.

It will soon become easy for self-driving cars to hide in plain sight. The rooftop lidar sensors that currently mark many of them are likely to become smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which carries its lidar sensors behind the car’s front grille, are already indistinguishable to the naked eye from ordinary vehicles operated by humans.

Is that a good thing? As part of our Driverless Futures project at University College London, my colleagues and I recently completed the largest and most comprehensive survey of citizens’ attitudes toward self-driving cars and the rules of the road. One of the questions we decided to ask, after conducting more than 50 in-depth interviews with experts, was whether autonomous cars should be labeled. The consensus of our sample of 4,800 British citizens is clear: 87% agreed with the statement “It must be clear to other road users when a vehicle is driving itself” (just 4% disagreed; the rest were unsure).

We sent the same questionnaire to a smaller group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question is not straightforward. There are valid arguments on both sides.

We can argue that, in principle, humans should know when they are interacting with robots. That was the argument made in 2017 in a report commissioned by the UK’s Engineering and Physical Sciences Research Council. “Robots are manufactured artifacts,” it said. “They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.” If self-driving cars are genuinely being tested on public roads, then other road users could be considered subjects in that experiment and should be given something like informed consent. Another argument for labeling, this one a practical one, is that, as with a car operated by a student driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.

There are also arguments against labeling. A label could be seen as an abdication of innovators’ responsibilities, implying that others must recognize and make accommodations for a self-driving car. And it could be argued that a new label, without a clearly shared sense of the technology’s limits, would only add confusion to roads already replete with distractions.

From a scientific perspective, labels also influence data collection. If a self-driving car is learning to drive and others know this and behave differently, this could taint the data it gathers. Something like that seemed to be on the mind of a Volvo engineer who told a reporter in 2016 that, “just to be on the safe side,” the company would use unmarked cars for its proposed self-driving trial on British roads. “I’m pretty sure that people will challenge them if they are marked, by doing really harsh braking in front of a self-driving car or putting themselves in the way,” he said.

On balance, the arguments for labeling, at least in the short term, are more convincing. This debate is about more than just self-driving cars. It cuts to the heart of the question of how novel technologies should be regulated. The developers of emerging technologies, who at first often portray them as disruptive and world-changing, are apt to paint them as merely incremental and unproblematic once regulators approach. But new technologies do not just fit into the world as it is. They reshape worlds. If we are to realize their benefits and make good decisions about their risks, we need to be honest about them.

To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, but better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: “Humans have data coming in through the sensors (the cameras on our face and the microphones on the sides of our heads), the data comes in, we process the data with our ape brains, and then we take actions, and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”