Sharp eyesight can’t reliably keep you vertical if you have lost your sense of balance, which shows just how important it is to have several different senses at your disposal.
That’s the idea behind a system recently proposed by two Korean engineers to make a car better at finding itself a parking spot. Rather than depend on ultrasound sensors mounted on the grille, as some parking programs do, or on cameras mounted on all sides, the researchers fuse data from both kinds of sensor. Three kinds, if you count the odometer, which measures the car’s movement.
The auto industry is working on ways to automate the entire parking process, not just the last bit. And Audi, Volvo and Nissan have all shown off parking-space finders that perform well in controlled circumstances, say by linking to a parking lot’s WiFi system. But to work alone in an uncooperative world, cars will have to wring more data from their existing sensors.
Jae Kyu Suhr, an IEEE member, and Ho Gi Jung, a senior member, recently laid out a way to do that. The researchers are affiliated with Hanyang University, in Seoul; their work was supported by the Hyundai Motor Company.
Sensor fusion is tricky because the various sensors look at an object from different angles. The researchers solve the problem with a lot of math and some plain old ingenuity.
First, the car moves past a parking area, scanning for the marked edges of parking slots and for obstacles (such as a parked vehicle). The several cameras and the two ultrasound sensors naturally provide different vantage points; what is visible to one sensor may be obscured from another.
At a given point, called a frame, the car processes all the available information to classify the parking area by its structure: an array of rectangles laid out orthogonally, at an angle, in a staggered fashion, and so on. Now that the car knows what it’s dealing with, it knows what to look for: say, the characteristic edges of a staggered rectangular parking slot.
By breaking down the job in this hierarchical fashion, the researchers say, their system can keep the computational time to just 32 milliseconds, compared with 82 ms for a vision-only system. “These results reveal that the proposed system can surely operate in real time,” they write.
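The two-stage idea can be sketched in a few lines of code. This toy version classifies the lot’s structure from the angles of detected slot edges, then narrows the search to the edge angles that layout implies; the layout names, angle thresholds, and scoring rules here are illustrative assumptions, not the authors’ published algorithm.

```python
def classify_layout(edge_angles):
    """Guess the lot's structure from slot-edge angles (in degrees)."""
    # Mostly right angles -> an orthogonal grid of slots.
    right = sum(1 for a in edge_angles if abs(a - 90.0) <= 5.0)
    if right >= 0.8 * len(edge_angles):
        return "orthogonal"
    # Edges at two widely separated oblique angles -> a staggered layout.
    if max(edge_angles) - min(edge_angles) > 20.0:
        return "staggered"
    return "angled"

def expected_edge_angles(layout):
    """Once the layout is known, only these edge angles need searching."""
    return {"orthogonal": [90.0],
            "angled": [45.0],
            "staggered": [45.0, 135.0]}[layout]

# Example: a lot whose painted edges are all roughly perpendicular.
layout = classify_layout([89.0, 91.5, 90.2, 88.7, 90.0])
print(layout)                        # -> orthogonal
print(expected_edge_angles(layout))  # -> [90.0]
```

Restricting the second stage to one layout’s features is what shrinks the search space, and with it the computation time.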
Next, the car moves on to the next vantage point, giving each sensor system the chance to get a better view, at least of some features. Because the odometer tracks the car’s position, the system can figure out the new angle of observation and use it to update its earlier estimate. Besides improving accuracy, this continuous updating helps the system handle roads that aren’t perfectly flat.
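The odometry step above amounts to registering two observations of the same feature in a common frame. A minimal sketch, assuming the odometer reports the car’s displacement and rotation between frames; the simple weighted average stands in for whatever filter the real system uses, and the function names are hypothetical.

```python
import math

def to_current_frame(point, dx, dy, dtheta):
    """Re-express a point seen from the previous car pose in the current
    pose's frame, given odometry: the car moved (dx, dy) and turned
    dtheta radians. An illustrative rigid-body transform."""
    px, py = point[0] - dx, point[1] - dy      # undo the translation
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * px - s * py, s * px + c * py)  # undo the rotation

def fuse(old_estimate, new_measurement, dx, dy, dtheta, weight=0.5):
    """Update a slot-corner estimate with a fresh observation of it."""
    ox, oy = to_current_frame(old_estimate, dx, dy, dtheta)
    nx, ny = new_measurement
    return (weight * ox + (1 - weight) * nx,
            weight * oy + (1 - weight) * ny)

# The car drove 2 m straight ahead; a corner previously seen at (5, 1)
# should now appear at (3, 1), and a new sighting refines the estimate.
print(to_current_frame((5.0, 1.0), 2.0, 0.0, 0.0))  # -> (3.0, 1.0)
print(fuse((5.0, 1.0), (3.1, 0.9), 2.0, 0.0, 0.0))
```

Each new vantage point repeats this cycle, which is why tracking position with the odometer lets the estimates keep improving as the car rolls past.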
Finally, the system offers a selection of possible spots to the driver, who picks one by tapping a touchscreen. From that point on, the system works like existing car-parking programs.
There are a few drawbacks. The system can’t work as billed at night or in dimly lit, echo-prone underground parking garages. To overcome that problem, the researchers are developing new algorithms and working with specialized equipment, such as cameras with greater range. Rather than try to make one size fit all, they’d have independent programs for daylight, for night, and for enclosed spaces.