Images Enable Deere Autonomous Tech

Here's How Deere's Stereo Cameras, Half-Billion-Image Library Enable Autonomous Farming

By Dan Miller, Progressive Farmer Senior Editor
The autonomous tractor must understand what it sees. Deere is building that system onto its 8R tractors with six pairs of stereo cameras and a vast image library to move 40,000 pounds of steel and technology around a field with no human in sight. (DTN/Progressive Farmer photo by Dan Miller)

The immense image library John Deere is building to give live, in-field vision to its autonomous 8R tractor once did not account for a single potential "hazard": birds.

Deere found that its 8R autonomous tractor was, at times, stopping for birds within its cameras' field of view. The vision system that tells the tractor to proceed, or to stop when it does not recognize a potential obstacle, had not yet fully accounted for birds flitting around in front of it.

"We were not expecting to see so many (birds)," said Jorge Heraud, vice president for Automation and Autonomy at Deere's Blue River arm. "The tractor was stopping, saying there is something here, and the birds were staying." Deere built an image category for this moment, a class it calls floaters. With birds in its path, the autonomous tractor is instructed to continue its path. "The birds will fly away," the tractor is told. "You should not stop for birds."

(For a look at Deere's 8R autonomous tractor and See & Spray technology operating at its new John Deere Technology Test Farm outside Coupland, Texas, go to https://www.dtnpf.com/…)


Deere has a growing library of 500 million images it can pull from to bring vision capabilities to its product line -- for example, the stereo cameras of autonomous tractors, See & Spray cameras and processors, and its construction equipment lines. Of those images, about 500,000 are labeled for work in the real world. Image labeling is a process where specific patterns in an image are assigned to specific categories -- the floaters category, for example. Labels make images readable by machines, training them to recognize objects with ever greater accuracy.

When an 8R autonomous tractor "sees" a rock in front of it, it does not make the "go-no-go" decision by comparing actual images. Rather, the camera image is compared to patterns of pixels mapped within Deere's image classifier algorithm. That algorithm runs locally on the machine and is updated frequently.
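
To illustrate the distinction Deere draws -- matching learned pixel patterns rather than comparing stored images -- here is a minimal sketch of local classifier inference using PyTorch. The tiny network, the class list and the input size are stand-ins; Deere's actual model architecture is not public.

```python
# Minimal sketch of on-machine classifier inference. The go/no-go call
# comes from a model's learned pixel patterns, not from comparing the
# frame against stored reference images.
import torch
import torch.nn as nn

CLASSES = ["clear_ground", "rock", "floater", "person"]  # illustrative labels

# Tiny stand-in network; real field models are far larger and trained
# on labeled images like the ~500,000 the article mentions.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, len(CLASSES)),
)
model.eval()

frame = torch.rand(1, 3, 224, 224)   # one camera frame (random stand-in)
with torch.no_grad():
    probs = model(frame).softmax(dim=1)[0]

print({c: round(float(p), 3) for c, p in zip(CLASSES, probs)})
```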

"We label images to understand what is in that scene," said Joe Liefer, senior project manager for Autonomy at Deere. "And, ultimately, label individual pixels (to say) this is ground, dirt, sky, tree, etc. We can trigger a stop. That's done locally through (the) ethernet of the video processing unit. It determines something's in front of us, sends a request to stop, the tractor takes over and stops."

The 8R autonomous tractor gets its vision from six pairs of stereo cameras and ultra-fast processors. The Deere-developed camera pairs give the tractor 360 degrees of vision for obstacle detection and calculation of distance. The pixels (the small dots that make up a digital image) captured by the tractor cameras are classified in about 100 milliseconds -- roughly the blink of an eye -- and the system determines whether the machine continues to move or stops. The cameras also capture an unusually wide range of light levels, what the industry calls high dynamic range. HDR technology is in your iPhone (Settings > Camera). Instead of a single image, the camera takes three (in the case of the iPhone) at different exposures. HDR creates images with more usable data than a single exposure can capture in light ranging from bright sun to deep shadow. Deere is using LED lighting to aid the sensitivity of its high-resolution cameras as it builds toward nighttime autonomous operations.
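
For readers who want to see the HDR idea in code, the sketch below fuses three exposures of one scene into a single frame using OpenCV's Mertens exposure fusion -- one common way to do what the article describes. Deere's camera pipeline is not public, and the synthetic scene here is just a test pattern.

```python
# Sketch of HDR-style exposure fusion: several exposures merged into one
# frame with usable detail in both bright and shadowed areas.
import cv2
import numpy as np

# Three synthetic "exposures" of the same scene: dark, normal, bright.
base = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))
scene = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
exposures = [
    cv2.convertScaleAbs(scene, alpha=0.4),  # underexposed
    scene,                                  # normal
    cv2.convertScaleAbs(scene, alpha=1.8),  # overexposed
]

# Mertens fusion weights each pixel by contrast, saturation and exposure.
fused = cv2.createMergeMertens().process(exposures)  # float32, roughly [0, 1]
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)
print(fused_8bit.shape, fused_8bit.dtype)
```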

"The autonomous tractor cameras (are) unique. It uses two cameras, like the human eye as a pair with a cross angle to them," said Deere's Liefer. "It allows us to get depth from those images. Depth allows us to localize objects."

Deere samples a great many images from its stereo cameras to improve its algorithms (Deere says it currently has fewer than 50 autonomous tractors working farms). But Deere does not store every image. "We're not recording 24/7. We are asking growers' permission to essentially sample data coming off their fleet. We can run an algorithm in the background to look for new and interesting images. Anytime it stops, we grab that data. We can flag that and upload that single image for training (machine learning) and evaluation."
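
The article does not say what makes an image "new and interesting" to Deere's background algorithm, but one common approach is to flag frames the on-board classifier is least certain about. A minimal sketch, assuming classifier entropy as the interest signal and a made-up threshold:

```python
# Sketch of a background filter that flags frames worth uploading:
# every stop is grabbed, plus frames the model is unsure about.
import numpy as np

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a class-probability vector."""
    p = np.clip(probs, 1e-9, 1.0)
    return float(-(p * np.log(p)).sum())

UPLOAD_THRESHOLD = 1.0  # illustrative: higher entropy = less certain

def maybe_flag_for_upload(class_probs: np.ndarray,
                          tractor_stopped: bool) -> bool:
    return tractor_stopped or entropy(class_probs) > UPLOAD_THRESHOLD

confident = np.array([0.97, 0.01, 0.01, 0.01])
unsure = np.array([0.4, 0.3, 0.2, 0.1])
print(maybe_flag_for_upload(confident, tractor_stopped=False))  # False
print(maybe_flag_for_upload(unsure, tractor_stopped=False))     # True
print(maybe_flag_for_upload(confident, tractor_stopped=True))   # True
```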

Deere has added a new layer of functionality to its autonomous system, inserting itself into the live, in-field, go-no-go loop with the purchase of SparkAI. SparkAI combines people and technology to resolve AI edge cases (go or no-go, for example), false positives and other exceptions in live production scenarios. No matter how sophisticated, artificial intelligence will always confront exceptions -- "Is that a ditch I can't drive through, or a shadow I can proceed over?" Add rain, dust, snow and blowing debris, and image detection only gets harder. SparkAI, with its human mission specialists and proprietary software, offers a kind of situational triage, Deere said, resolving in-field issues within seconds.
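
Conceptually, this kind of human-in-the-loop triage can be sketched as a three-way split on the machine's confidence: clear false positives continue automatically, clear obstacles go to the farmer, and the ambiguous middle goes to a remote specialist. The thresholds and function below are invented for illustration; SparkAI's actual system is proprietary.

```python
# Sketch of confidence-based triage for go/no-go edge cases.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # model's confidence that this is a real obstacle

def triage(det: Detection) -> str:
    if det.confidence < 0.2:
        return "continue"            # almost certainly a false positive
    if det.confidence > 0.9:
        return "notify_farmer"       # clearly a real obstacle: farmer decides
    # The ambiguous middle is the "edge case": a human mission
    # specialist looks at the image and answers go or no-go.
    return "escalate_to_specialist"

print(triage(Detection("shadow", 0.12)))   # continue
print(triage(Detection("ditch?", 0.55)))   # escalate_to_specialist
print(triage(Detection("person", 0.98)))   # notify_farmer
```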

"As we develop autonomy, you have situations where (equipment) stops," Leifer said. "(With SparkAI), we can clear that false positive and automatically restart the tractor without having to notify the grower in real-time." (SparkAI) uses a human to call it, to continue or not. If it's a real obstacle, we'll send it to the farmer (for his decision). But the idea is not to distract the farmer and lower his productivity." The grower will get a "stop" report at the end of a field mission. But the grower will not have to deal with every stop report.

SparkAI is a New York-based startup that helps robots resolve edge cases in real time (yes, the 40,000-pound autonomous 8R is a robot of sorts). John Deere was a SparkAI customer for a few years prior to the acquisition.

Dan Miller can be reached at dan.miller@dtn.com

Follow him on Twitter @DMillerPF
