• From clickbait to transparency: Reimagining the online world

    From ScienceDaily@1337:3/111 to All on Mon Jun 15 21:30:32 2020
    From clickbait to transparency: Reimagining the online world
    Behavioral science perspectives on an alternative Internet

    Date:
    June 15, 2020
    Source:
    Max Planck Institute for Human Development
    Summary:
    Behavioral science perspectives on an alternative Internet.



    FULL STORY
    ==========================================================================
    Polarization, conspiracy theories, fake news: What people see on the
    Internet is largely determined by the opaque algorithms of just a few
    corporations.

    That's a worrying development for democratic societies. But online
    environments could be designed in ways that promote autonomy and
    transparency, thereby fostering the positive potential of the Internet. A
    team of researchers from the Max Planck Institute for Human Development in Berlin, the University of Bristol, and Harvard Law School has approached
    this debate from the behavioral science perspective and proposed
    interventions capable of promoting a more democratic Internet. Their
    findings have been published in Nature Human Behaviour.


    ==========================================================================
    You're scrolling through your YouTube, Facebook, or Twitter feed when a
    video appears: a report on secret political deals, media conspiracies, or
    fake scientific studies. Everything seems to be backed up by facts. The
    video has hundreds of thousands of clicks and appears to be gaining
    traction worldwide.

    And plenty of viewers seem to agree with it, having given it a "thumbs
    up." But what exactly does all this mean? Does it make the video
    credible? How many viewers watched the video to the end; how many
    left midway? And why did the video appear in your feed in the first
    place? The online platform knows many of the answers, but it doesn't
    share that information. And no other cues are provided that could help
    you to assess the content, quality, or credibility of the video.

    According to the Reuters Institute Digital News Report 2019, more than
    half (55%) of the world's Internet users now use social media or search
    engines to keep up with the news. In other words, public opinion is
    shaped largely by online content, which is in turn curated by algorithms.

    "The aim is to keep users happy for as long as possible so that they
    stay on the platform. That's achieved by providing entertainment and
    establishing a feeling of well-being -- which probably explains why many
    platforms don't have 'dislike' buttons that allow users to down-vote
    content. The feeling being conveyed is: You're right. That may be
    harmless when we share private content such as holiday photos, but it
    distorts the picture when what's being spread are radical opinions and
    untruths," says co-author Stephan Lewandowsky, Professor of Cognitive
    Psychology at the University of Bristol.

    Another problem is that newsfeed algorithms only show users content
    that -- based on their previous online behavior -- they are likely to
    agree with. Other points of view tend not to be shown at all. This
    creates networks of like-minded users, which reinforce shared opinions
    and create the illusion of broad support, even if an opinion is
    actually not widely held. But how can users distinguish fake news from
    facts? And how do online environments need to be designed to support
    these efforts?

    "The interventions we propose are aimed at empowering individual users
    to make informed and autonomous decisions in online environments --
    ideally, without having to rely on independent fact checkers. The
    architecture of online environments influences users' behavior. To
    change that environment for the better, we need to understand human
    behavior and take that behavior into account in design choices," says
    Philipp Lorenz-Spreen, lead author of the study and researcher at the
    Max Planck Institute for Human Development.

    The research team has identified a number of ways to help users
    evaluate the quality of Internet content and to understand the
    mechanisms behind algorithms -- without curtailing the freedoms central
    to the original philosophy of the Internet. These possibilities are
    technologically feasible but so far largely untapped. The research team
    has developed specific recommendations to empower individuals online,
    drawing on two approaches from the behavioral sciences: nudging and
    boosting.

    Nudging aims to steer people's behavior by highlighting important
    information without imposing rules or bans. Boosting aims to enhance
    people's decision-making competence, enabling them to navigate
    situations autonomously and make better decisions. According to the
    researchers, it is important to take a multi-pronged approach to reach
    as many people as possible. Nudging could be used, for example, to
    indicate whether content meets certain quality criteria -- such as
    whether it stems from trustworthy sources. This would both prompt users
    to check content thoroughly and create an incentive structure for
    content producers to meet the required criteria. Including hyperlinks
    to verified sources would be another step. Wikipedia could serve as a
    model here: the references and cross-references it provides help to
    contextualize the information presented. Twitter recently took a step
    in this direction and started flagging some tweets with a fact-check
    warning -- including a tweet by Donald Trump on the subject of postal
    voting.
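
    As a concrete illustration of how such a source-based quality label
    might be generated, here is a minimal Python sketch (not from the
    study): it checks an item's source domain against a hypothetical
    allowlist and its cited references, and returns a short label a
    platform could display next to the post. The domain list, function
    name, and label wording are all illustrative assumptions.

        # Minimal sketch of a source-transparency nudge. The allowlist and
        # label texts are illustrative assumptions, not the study's criteria.
        TRUSTED_DOMAINS = {"nature.com", "who.int", "reuters.com"}

        def quality_label(source_domain: str, cited_urls: list[str]) -> str:
            """Return a short label a platform could attach to a post."""
            trusted = source_domain in TRUSTED_DOMAINS
            referenced = len(cited_urls) > 0
            if trusted and referenced:
                return "Trusted source; external references provided"
            if referenced:
                return "External references provided; source not on trusted list"
            return "No external references cited"

        print(quality_label("nature.com",
                            ["https://doi.org/10.1038/s41562-020-0889-7"]))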

    Nudges could also communicate additional information about other
    users' online behavior, giving further context on how others evaluate
    content. For example, the number of likes could be set in relation to the
    total number of readers: "4,287 out of 1.5 million readers liked this
    article." The information that there were 44,000 clicks on an article,
    but that only 3,000 users read it to the end could be a better indicator
    of quality than numbers of clicks and likes alone. It has been shown
    that transparent numerical formats can improve medical decisions. Why
    shouldn't the same apply to the Internet? This approach could prevent
    the formation of echo chambers in which groups of people wrongly believe
    that their opinion is widespread and accepted.
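
    As a rough illustration, the following Python sketch formats the two
    kinds of transparent figures mentioned above. The function names are
    arbitrary; the numbers are simply the examples quoted in the article.

        # Sketch of transparent engagement figures: likes and completed reads
        # expressed relative to the total number of readers or clicks.
        def like_summary(likes: int, readers: int) -> str:
            """Put the number of likes in relation to the total readership."""
            return f"{likes:,} out of {readers:,} readers liked this article."

        def completion_summary(clicks: int, completed_reads: int) -> str:
            """Show what share of clicks led to the article being read to the end."""
            share = completed_reads / clicks * 100
            return f"{completed_reads:,} of {clicks:,} readers ({share:.1f}%) read it to the end."

        print(like_summary(4_287, 1_500_000))      # example from the article
        print(completion_summary(44_000, 3_000))   # example from the article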

    Another possibility might be to make it slightly more difficult for
    users to share information when an article fails to cite external
    references. For example, users might be required to click past a pop-up
    window containing a warning that references are missing. Another type
    of nudge might target how content is arranged in browsers -- that is,
    the way a newsfeed sorts content.
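
    A minimal sketch of the sharing-friction nudge described above, using a
    console prompt as a stand-in for the pop-up window; the function name
    and prompt wording are illustrative assumptions.

        # Require an extra confirmation before sharing an article that cites
        # no external references (console prompt stands in for a pop-up).
        def confirm_share(article_title: str, cited_urls: list[str]) -> bool:
            """Ask for confirmation only when no external references are cited."""
            if cited_urls:
                return True  # references present: share without extra friction
            answer = input(f'"{article_title}" cites no external references. '
                           "Share anyway? [y/N] ")
            return answer.strip().lower() == "y"

        if confirm_share("Secret political deals exposed", cited_urls=[]):
            print("Shared.")
        else:
            print("Sharing cancelled.")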

    Clearly differentiating between types of content, such as opinions,
    news, or posts by friends, can make newsfeeds more transparent.

    Boosting, in contrast, aims to enhance user competence in the long
    term. This could, for instance, mean enabling people to adjust how their newsfeeds are sorted -- letting them change their settings to reflect
    which factors are most important to them and which content they want
    to see first. Social networks could be obliged to indicate why some
    content is weighted more heavily and shown first, while other content
    is given lower priority. Another example of a boost would be to present
    the original source of a piece of information and the route by which
    it reached a user. Simple decision trees could guide users through
    a step-by-step process of checking the origin, context, and quality
    of online content. In other words, users would be given the tools to
    become fact checkers themselves. In the long term, they would learn
    to recognize patterns, to critically assess content, and to identify manipulation more quickly.

    "The interactive nature of social media could be harnessed to promote
    diverse democratic dialogue and foster collective intelligence. Our
    aim is to find ways to strengthen the Internet's potential to inform decision-making processes in democratic societies, bolstering them
    rather than undermining them. Global problems like climate change and
    the coronavirus pandemic require coordinated, collective solutions. And
    that makes a democratically interconnected online world crucial," says
    Ralph Hertwig, Director of the Center for Adaptive Rationality at the
    Max Planck Institute for Human Development.


    ==========================================================================
    Story Source: Materials provided by the Max Planck Institute for Human
    Development. Note: Content may be edited for style and length.


    ==========================================================================
    Journal Reference:
    1. Philipp Lorenz-Spreen, Stephan Lewandowsky, Cass R. Sunstein, Ralph
       Hertwig. How behavioural sciences can promote truth, autonomy and
       democratic discourse online. Nature Human Behaviour, 2020; DOI:
       10.1038/s41562-020-0889-7
    ==========================================================================

    Link to news story: https://www.sciencedaily.com/releases/2020/06/200615115801.htm

    --- up 20 weeks, 6 days, 2 hours, 34 minutes
    * Origin: -=> Castle Rock BBS <=- Now Husky HPT Powered! (1337:3/111)