Thanks for the thoughts, Heather. While not an exact analogy, we see different models in the self-driving car business:
- Robotaxis do their own thing, but there is a remote human "team" member they can call upon for help.
- Computer drivers do the driving, but a person is supposed to jump in and help them, sometimes with advance notice and sometimes not (there are many variations of this).
- Computer drivers say they are helping the person, but between confirmation bias and automation complacency, the human is too often reduced to a moral crumple zone.
- Computers help save the human driver from a bad situation (think automated emergency braking).
I agree that the teaming aspects have a lot of issues. In the car world, this shows up as difficulty determining whether the person bears liability for a crash.