I wanted to do a quick follow-up on the human-automation vehicle ideas I posted last week. The feedback I received pushed me to think through my concepts a little more, and with a little more clarity.
The main comment that made me revisit the post concerned the legitimacy of the human-automation team. While I still believe this is the best approach, the feedback made me fully consider some of the context around it. Namely, research in this area generally focuses on human ‘experts’: the human driver is competent enough to make good decisions when necessary, provided they have had the necessary training.
However, as driverless vehicles take on more and more of a role, developing human experts is going to become more difficult. Currently, most drivers are of sufficient skill (this is a bit of an assumption) to wisely take corrective action when necessary. But with little opportunity to train and drive, the number of drivers with these skills will wane. New drivers coming of age will get almost no time behind the wheel, so how will they ever become experts?
The same problem holds in other forms of transportation, like airlines and trains, both of which rely on automation. However, there are far fewer pilots and operators than there are drivers. Let’s assume there are 200,000 trained commercial pilots in the world (I could not easily find a count). It makes sense to train and maintain the skill level of this number of pilots. But there are probably close to a billion drivers in the world. The case for training and maintaining expertise for that many people is harder to make, especially for a less than 0.1% chance of needing to take over. The time, money, and energy it would take to train that many drivers for such little impact does not make much sense.
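The back-of-envelope math above can be made concrete with a quick sketch. All of the figures here are assumptions for illustration (the 200,000 pilots and one billion drivers from above, plus a made-up per-person training cost); the point is the ratio, not the absolute numbers.

```python
# Rough sketch of the training-cost argument. Every figure is an
# assumption chosen for illustration, not real data.

PILOTS = 200_000            # assumed trained commercial pilots worldwide
DRIVERS = 1_000_000_000     # assumed drivers worldwide
TRAINING_COST = 5_000       # hypothetical cost (USD) to train one person
                            # to takeover-ready expertise

def total_training_cost(people, cost_per_person=TRAINING_COST):
    """Total cost to train a population to expert-level takeover skill."""
    return people * cost_per_person

pilot_cost = total_training_cost(PILOTS)
driver_cost = total_training_cost(DRIVERS)

# The driver population is about 5,000 times larger than the pilot
# population, so whatever it costs per person, the total scales by
# the same factor -- for an event that is needed <0.1% of the time.
print(f"pilots:  ${pilot_cost:,}")
print(f"drivers: ${driver_cost:,}")
print(f"ratio:   {driver_cost / pilot_cost:,.0f}x")
```

Whatever the real per-person cost turns out to be, the population ratio alone makes the economics of expert drivers several thousand times worse than the economics of expert pilots.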
So we are stuck in a bit of a conundrum. The best, most error-resistant system involves a human expert-automation team, but it is poor economics to actually develop human experts at scale.
So my stance has shifted slightly (in a little less than a week, a new record). The first autonomous vehicles should still leave room for the human operator to understand the computer’s actions and jump in as necessary; the concepts I tossed out last week still feel largely valid. However, as current and near-future drivers age, developing human experts will simply not be practical. At that point, we should probably start the transition to fully autonomous vehicles. New technologies, like car-to-car communication, will make the automation even better and more fault resistant (though never fully fault free), and we will have passed gracefully through the awkward growing phase to get there. The pairing of a human expert with technology is great, but human novices and technology is just asking for trouble.