• Light processing improves robotic sensing

    From ScienceDaily@1337:3/111 to All on Mon Sep 14 21:30:44 2020
    Light processing improves robotic sensing, study finds

    Date:
    September 14, 2020
    Source:
    U.S. Army Research Laboratory
    Summary:
    A team of researchers uncovered how the human brain processes
    bright and contrasting light, which they say is a key to improving
    robotic sensing and enabling autonomous agents to team with humans.



    FULL STORY ==========================================================================
    A team of Army researchers uncovered how the human brain processes
    bright and contrasting light, which they say is a key to improving
    robotic sensing and enabling autonomous agents to team with humans.


    To enable developments in autonomy, a top Army priority, machine sensing
    must be resilient across changing environments, researchers said.

    "When we develop machine vision algorithms, real-world images are usually compressed to a narrower range, as a cellphone camera does, in a process
    called tone mapping," said Andre Harrison, a researcher at the U.S. Army
    Combat Capabilities Development Command's Army Research Laboratory. "This
    can contribute to the brittleness of machine vision algorithms because
    they are based on artificial images that don't quite match the patterns
    we see in the real world." By developing a new system with 100,000-to-1 display capability, the team discovered the brain's computations, under
    more real-world conditions, so they could build biological resilience
    into sensors, Harrison said.
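
    To make the compression step concrete, here is a minimal sketch in
    Python of one common global tone-mapping operator (Reinhard's); the
    story does not say which operator cellphone cameras or the
    researchers' pipeline actually use, so this is illustrative only,
    and the function name is ours.

        import numpy as np

        def reinhard_tone_map(luminance, key=0.18):
            """Compress HDR luminance into [0, 1): scale by the scene's
            log-average luminance, then apply L / (1 + L)."""
            log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
            scaled = key * luminance / log_avg  # 0.18 = middle-gray key
            return scaled / (1.0 + scaled)

        # Synthetic HDR scene spanning a 100,000-to-1 luminance ratio
        hdr = np.random.uniform(1.0, 1e5, size=(4, 4))
        ldr = np.round(255 * reinhard_tone_map(hdr)).astype(np.uint8)
        print(ldr)  # every value now fits in 8 bits (0-255)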

    Current vision algorithms are based on human and animal studies with
    computer monitors, which have a limited luminance range of about
    100-to-1, the ratio between the brightest and darkest pixels. In the
    real world, that variation can reach a ratio of 100,000-to-1, a
    condition called high dynamic range, or HDR.
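
    For scale (an illustrative conversion, not from the story): in
    photographic terms, a 100-to-1 ratio is log2(100) ~= 6.6 stops of
    dynamic range, while 100,000-to-1 is log2(100,000) ~= 16.6 stops --
    two orders of magnitude of luminance versus five.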

    "Changes and significant variations in light can challenge Army systems -
    - drones flying under a forest canopy could be confused by reflectance
    changes when wind blows through the leaves, or autonomous vehicles driving
    on rough terrain might not recognize potholes or other obstacles because
    the lighting conditions are slightly different from those on which their
    vision algorithms were trained," said Army researcher Dr. Chou Po Hung.



    The research team sought to understand how the brain automatically
    takes the 100,000-to-1 input from the real world and compresses it
    to a narrower range, which enables humans to interpret shape. The
    team studied early visual processing under HDR, examining how simple
    features like HDR luminance and edges interact, as a way to uncover
    the underlying brain mechanisms.
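
    The story does not name the mechanism the team uncovered, but a
    standard textbook model for this kind of compression in early vision
    is divisive normalization, sketched below in Python as background:
    each pixel's response is divided by pooled nearby luminance, so local
    contrast and edges survive even as absolute intensity varies over
    five orders of magnitude. The helper name is ours.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def divisive_normalization(luminance, sigma=1.0, window=7):
            """Divide each pixel by pooled local luminance so the output
            encodes local contrast rather than absolute light level."""
            local_mean = uniform_filter(luminance, size=window)
            return luminance / (sigma + local_mean)

        # A scene spanning five orders of magnitude collapses to a
        # narrow, contrast-like range after local normalization.
        hdr = np.random.uniform(1.0, 1e5, size=(32, 32))
        out = divisive_normalization(hdr)
        print(out.min(), out.max())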

    "The brain has more than 30 visual areas, and we still have only a
    rudimentary understanding of how these areas process the eye's image
    into an understanding of 3D shape," Hung said. "Our results with HDR
    luminance studies, based on human behavior and scalp recordings, show just
    how little we truly know about how to bridge the gap from laboratory to real-world environments. But, these findings break us out of that box,
    showing that our previous assumptions from standard computer monitors
    have limited ability to generalize to the real world, and they reveal principles that can guide our modeling toward the correct mechanisms."
    The Journal of Vision published the team's research findings, Abrupt
    darkening under high dynamic range (HDR) luminance invokes facilitation
    for high contrast targets and grouping by luminance similarity.

    Researchers said the discovery of how light and contrast edges interact
    in the brain's visual representation will help improve the effectiveness
    of algorithms for reconstructing the true 3D world under real-world
    luminance, by correcting for ambiguities that are unavoidable when
    estimating 3D shape from 2D information.

    "Through millions of years of evolution, our brains have evolved effective shortcuts for reconstructing 3D from 2D information," Hung said. "It's a decades-old problem that continues to challenge machine vision scientists,
    even with the recent advances in AI." In addition to vision for autonomy,
    this discovery will also be helpful to develop other AI-enabled devices
    such as radar and remote speech understanding that depend on sensing
    across wide dynamic ranges.

    With their results, the researchers are working with partners in
    academia to develop computational models, specifically with spiking
    neurons that may have advantages for both HDR computation and for
    more power-efficient vision processing -- both important
    considerations for low-powered drones.
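
    The story does not describe the partners' models, but the basic unit
    they would build on is a spiking neuron. A minimal leaky
    integrate-and-fire sketch in Python (the simplest textbook spiking
    model; the function name is ours) illustrates one reason such
    hardware can be power-efficient: the neuron emits no events unless
    it is driven.

        import numpy as np

        def lif_neuron(input_current, dt=1e-3, tau=20e-3,
                       v_rest=0.0, v_thresh=1.0):
            """Leaky integrate-and-fire: the membrane voltage leaks
            toward rest, integrates input, and spikes at threshold."""
            v, spike_times = v_rest, []
            for step, i_in in enumerate(input_current):
                v += (dt / tau) * (v_rest - v + i_in)
                if v >= v_thresh:
                    spike_times.append(step * dt)  # record the spike
                    v = v_rest                     # reset the membrane
            return spike_times

        # Silent with no input; steady spiking when driven.
        print(len(lif_neuron(np.zeros(1000))))      # 0 spikes
        print(len(lif_neuron(np.full(1000, 2.0))))  # regular spiking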

    "The issue of dynamic range is not just a sensing problem," Hung said. "It
    may also be a more general problem in brain computation because individual neurons have tens of thousands of inputs. How do you build algorithms
    and architectures that can listen to the right inputs across different contexts? We hope that, by working on this problem at a sensory level,
    we can confirm that we are on the right track, so that we can have the
    right tools when we build more complex AIs."

    ==========================================================================
    Story Source: Materials provided by U.S. Army Research Laboratory.
    Note: Content may be edited for style and length.


    ==========================================================================
    Journal Reference:
    1. Chou P. Hung, Chloe Callahan-Flintoft, Paul D. Fedele, Kim F.
       Fluitt, Onyekachi Odoemene, Anthony J. Walker, Andre V. Harrison,
       Barry D. Vaughan, Matthew S. Jaswa, Min Wei. Abrupt darkening
       under high dynamic range (HDR) luminance invokes facilitation for
       high-contrast targets and grouping by luminance similarity.
       Journal of Vision, 2020; 20 (7): 9. DOI: 10.1167/jov.20.7.9
    ==========================================================================

    Link to news story: https://www.sciencedaily.com/releases/2020/09/200914112153.htm

    --- up 3 weeks, 6 hours, 50 minutes
    * Origin: -=> Castle Rock BBS <=- Now Husky HPT Powered! (1337:3/111)