Everything You Ever Thought About Human-Machine Teaming May Be Wrong:
Or Why I Prefer the Term “HAWCing” and You Should Too
Like all good blog posts, let’s start with an obvious pun to brighten the day. I’m sure what I’m about to say here will ruffle a few feathers… given I’m about to advocate for HAWCs or HAWCing… as opposed to the more fashionable approach of “Human-Machine Teaming.” But before we get to what HAWCing is, or why we should choose this approach and turn some of our favorite paradigms on their heads, let me tell you a little bit about Human-Machine Teaming (HMT), or as some would say Human-Autonomy Teaming (HAT), and why this seems to be the dominant way of viewing how humans and machines, or sometimes just “AI agents”, work together. Also… the fact that this paradigm is never questioned is a problem… and recent endeavors to drink the Kool-Aid should give you pause.
Stepping into my way-back machine, let’s start with discussions around lethal autonomous weapons systems (early 2010s). During this debate many worried over the extent to which autonomous weapons systems (AWS) would be used in armed conflict and how that use would be outside of human control, risk conflict escalation, make conflict initiation more likely, dehumanize armed conflict, and more. I’ve written and testified extensively on this and won’t say much about it here. What I will say is that during this time period, there was a bit of a shift in focus: not just autonomous systems acting out in the world by themselves, but how they would be paired or “teamed” with humans.
Before this shift really gained traction, there was, of course, the suggestion that teaming was more about “centaur war” (arising from discussions about “centaur chess”) and how the marriage between a system and a human could provide the human with more capabilities than the human acting alone, and vice versa. Let’s put a pin in this one for a second.
In any event, the HMT, and now sometimes HAT, perspective began to gain more and more interest, and more and more funding. Over the last 15 years, many researchers in both government and academia have pursued research endeavors on various aspects of HMT. More recently, with the increased attention on AI, some of that focus has narrowed to teaming with AI agents too. I cannot begin to cite all of this literature as it is mountainous.
But stick with me.
Ok, so we have interest bubbling up in autonomous weapons systems, we have a bit of a shift to focusing on HMT, and now we see a ton of research on HMT/HAT/Human-AI Teaming (HAIT?). But what if this entire paradigm is wrong? What if it is focused on the wrong thing? What if the focus on the “team” is, for whatever reason, not only hindering our ability to make technological progress, but also obfuscating what is actually happening to humans, for humans, and by humans? What if there never was a “team” and it was just the human all the way down? Would that change the way we design, develop, deploy, test, evaluate, validate, verify, and govern such systems? I think it would.
I realize I’ve buried the lede. I did it intentionally. I want to suggest that instead of focusing on teaming we should really be focused on augmentation. So the real focus should be on “Human Augmentation With Computing” or HAWC. But let me tell you why, and why changing this focus has not only important research and engineering implications, but also serious Ethical, Legal, and Societal Implications (ELSI). But before we get there, let’s lay out the general themes around “teaming.”
Teaming as a construct needs all sorts of things to make sense. There needs to be communication that is shared, intentional, and understandable. Then there *maybe* need to be shared goals. The psychological literature talks about shared goals all the time… but that is about human-to-human teams. There are also latent features about theories of mind, which, of course, help with understanding and shared goals. Then there are also claims about needing to “understand” the idea of “commander’s intent”. The more I think about this one, the more I want to push back. I want to push back because humans have a hard time understanding intent, and intent cannot be fully captured by pattern recognition. Then there is the pesky thing that if you don’t *really* care about intent but about outcomes, then the commander’s intent is just a process document, not really about intention or intention recognition.
But the idea of a team is the core unit of analysis. The problem I have is that in discussions around HMT, there is NO TEAM. What it is, to date, is a discussion about human-machine or human-computer interaction to extend one’s intention or sphere of action beyond boundaries normally imposed in time and space. Think of it like human augmentation… which is another whole can of worms, I know. But if I can augment my capabilities, in time and space, with the aid of a device that is not a “teammate” but a thing… an object… then that thing merely augments my abilities, my judgment and intention. My glasses allow me to see my computer screen more clearly as I write. They also allow me to see the street signs while driving at night. Is there an objective reality where other people’s eyes don’t need these augmentation devices called glasses? Sure.
But that does not take away from the idea of extension and augmentation. HAWCing is just that. We do not have teams. People team together. Humans and animals team together. To date, to be clear, there is no model where a human can team with a nonnatural entity. Why? Well, that answer is a book that requires a lot more explanation. The short answer is that our AIs are either parrots or BS generators, and there is no cognitive, social, or moral relationship with them.
If you buy my argument that there is a significant difference between HAWCing and HMT, then here is the big lift. We have been constructing our research projects and experiments all wrong. If I want to study teaming, why am I not focusing on the one unit of analysis, the team? Why am I focusing on only one participant (the human)?
I am not going to throw anyone under the bus here. But what I will do is say that constructing experiments about how people feel about the use of a tool, and how confident they are in that tool, is not actually going to tell you anything about 1) the team and 2) whether their responses are valid. On the first point, it is simply the wrong unit of analysis. On the second, all I want to say is that the Dunning-Kruger effect is well established in showing that people are pretty confident about things they know nothing about.
So the next time you decide that you are all for HMT, ask yourself: what problem does this solve? Is it a problem of responsibility? Is it attribution? Is it mission effectiveness? Is it… what? Define X.
Then, when you see there is no “team” and there is only “I”, start to ask questions about how you *would* build these systems, test these systems, and train people on these systems. Because once we stop talking about teams and start talking about the manipulation of human cognition… then we have a better sense of what we are doing. We are augmenting humans with computing… and that has a different research, empirical, and moral bar.
Thanks for the thoughts, Heather. While not an exact analogy, we see different models in the self-driving car business:
- Robotaxis go do their own thing, but there is a remote human "team" member they can call upon for help.
- Computer drivers do the driving, but a person is supposed to jump in and help them, sometimes with advance notice and sometimes not (many different variations of this)
- Computer drivers say they are helping the person, but between confirmation bias and automation complacency, too often the human is reduced to a moral crumple zone
- Computers help save the human driver from a bad situation (think automated emergency braking)
I agree that the teaming aspects have a lot of issues. In the car world this shows up as problems in determining whether the person is liable for a crash.