2020 trends in the AI space
In 2020, data privacy laws paired with public outcry will result in moves to monetize personal data.
We all know the story. Data is accumulating rapidly and serves as the fuel that keeps the well-oiled engine of artificial intelligence and advanced analytics humming. The other story is about data rights: the public is pushing for personal data to remain private and personal.
The next frontier, which we believe will come to the fore in 2020, is on the horizon and within reach: owning your own data. Although we’ve seen attempts in the past (e.g., personal.com in 2011), the time is ripe for individuals to have access to their personal data, no matter where it’s stored or what format it’s in.
This trend is driven by two key factors: public outcry and legislation.
- Public outcry
People are fed up with data breaches. Questionable uses of personal data, such as colleges quietly ranking prospective students based on personal data and companies using data from period-tracking apps for advertising purposes, make the public think twice about how their data is used and shared. In short, people are fed up with the current state of affairs and want control of, and transparency into, their data.
- Government lays down the law
Governments and governmental bodies, such as the European Union, have put legislation like GDPR in place to protect people’s personal data, giving them peace of mind that it will not be used against their will. We’ve seen this topic become a hot-button issue in the United States’ presidential debates.
Here’s where software companies come in. It’s all well and good to access, and maybe even monetize, your data, but to make this a reality, companies must provide software and services that help users access their personal data. We also need a way to make companies pay for the data they have already collected; the alternatives are to force them to purge their records and start fresh, or to let them keep what they’ve got for free.
At the end of October, we saw how Mozilla and Element AI (an enterprise AI software provider) are working on “data trusts,” which may become key as AI works its way into data collection solutions. Startups like People.io and Digi.me help individuals take control of their personal data, giving them the opportunity to monetize it.
On the educational front, seven British universities collaborated to create the Hub of All Things (HAT), a cloud-based microserver that acts like a mini fortress for all your personal data. You decide how to “spend” your data because you own the database.
In 2017, the startling headline, “I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets” graced the pages of The Guardian. In 2020, this will not be news. You, you, and you, whoever you may be, will be able to do this easily, effectively, and efficiently.
Welcome to the future.
AI must be understandable, or consigned to the bin
AI algorithms have often been viewed as impenetrable black boxes. All users see is the input and the output, with little to no understanding of why a particular output was produced. 2019 saw a rise in research on explainable AI, along with open source tools allowing curious users to peek inside these algorithms.
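To make the idea concrete, here is a minimal sketch of a "local explanation," the core idea behind many of these tools (the feature names, weights, and values below are hypothetical). For a simple linear scoring model, a prediction decomposes exactly into per-feature contributions, which is the kind of "why" answer explainability tooling surfaces:

```python
# Minimal sketch of a local explanation for one prediction.
# Feature names, weights, and applicant values are hypothetical.
# In a linear model, each feature's contribution to the score is
# simply weight * value, so the prediction decomposes exactly.

weights = {"income": 0.4, "age": -0.1, "tenure": 0.3}
applicant = {"income": 2.0, "age": 1.5, "tenure": 0.5}

score = sum(weights[f] * applicant[f] for f in weights)

# Per-feature contributions, ranked by absolute impact
contributions = {f: weights[f] * applicant[f] for f in weights}
ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

print(f"score = {score:.2f}")
for feature, c in ranked:
    print(f"  {feature}: {c:+.2f}")
```

Real-world explainers (e.g., SHAP-style tools) generalize this decomposition to nonlinear models, but the output users see is the same: a ranked list of which inputs pushed the prediction up or down.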
In 2020, we expect to see these techniques consumerized, with companies like Fiddler and Kyndi leading the charge. Explainability will be applied throughout data journeys, giving users the ability to understand their data in all of its shapes and forms.
The demand for transparency can be heard around the world. Intergovernmental bodies such as the Organisation for Economic Co-operation and Development (OECD) and the G20 (Group of Twenty) have taken note; both have drafted reports and guidelines on how transparency can be properly baked into products and processes.
Explainability is not just about opening up the algorithmic black box. The push toward transparency is about understanding data throughout the data journey, from integration to insights.
We have already seen movement in this direction with enterprise features like Tableau’s Explain Data. Tableau realizes that users want to deeply understand their data, not just create flashy visualizations, and startups such as Kyndi and Fiddler are focused on the same problem.
The future is bright
At G2, we think the next “big thing” will be the rise of explainable answers, or the ability for users to understand why a particular answer or insight is produced by software. Up until this point, we have been focused on the what and the how of data. In the near future, we will desire answers about why our data is what it is and why specific insights or answers were generated.
In a world where data is big and the ethical quandaries are bigger, explainability will move from being a desirable feature to a necessary one.
With this in mind, we expect to see a slew of explainability-as-a-service startups solely focused on helping companies develop AI systems that justify the reasoning behind their conclusions and results. We look forward to seeing what the future has in store.
About face on facial recognition
As backlash continues around facial recognition technology, local and national governments will wake up to an onslaught of class action lawsuits. The result: More legislation and transparency around the use of facial recognition technology.
- Public outcry
The situation: For the sake of safety, public places where crowds gather, such as concert venues, are monitored by facial recognition technology powered by artificial intelligence.
The problem: Although the technology is efficient, it is not always effective and can be marred by racial bias. Because many of the algorithms are trained on datasets composed primarily of white male faces, minorities, such as African-Americans, are more likely to be falsely identified by these systems.
The racial bias issue is compounded by the perennial problem of data privacy (see above). When you can be identified by your face (along with your finger veins, gait, saliva, and much more), your private data no longer feels like your own; it feels like a violation of privacy and makes people particularly uncomfortable.
- Organizations listen
Groups such as Ban Facial Recognition and the ACLU are listening to the public; they argue that this technology is unreliable, biased, and a threat to basic rights and safety. The ACLU found that 76% of Massachusetts voters do not think the government should be able to monitor and track people with this technology.
- Government acts
As yet, the United States has no federal laws or regulations governing the sale, acquisition, use, or misuse of face surveillance technology by government agencies.
Great proliferation does not necessarily come with great regulation. Facial recognition is not currently governed by a specific legal framework in the United Kingdom either, meaning that private companies can use it without publicly declaring the move or notifying authorities, though the country is exploring a moratorium on automated facial recognition technology.
Governmental challenges with facial recognition
Although the public outcry is loud, we’ve heard only a small voice from government in terms of firm action against the technology. This is compounded by a recent NASCIO study, which found that only 9% of state CIOs have policies in place to ensure the responsible use of AI.
But there is a saving grace. According to a recent Deloitte survey, public sector early adopters are more concerned about ethical risks than those in any other industry. Their moral intuition is moving them to act, despite the lack of policy. Let’s pause and present a simple formula:
Public outcry + governmental reaction = a great reckoning for the tech companies supplying and using the technology.
We have already seen this equation in action. In June, the AI & Policing Technology Ethics Board at Axon (a creator of police body cams and more) released a report with actionable recommendations regarding facial recognition technology.
According to Axon's First Report of the Axon AI & Policing Technology Ethics Board:
"Face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups."
The future ahead for facial recognition
In the United States, we see glimmers of hope in the discussions arising from presidential debates. As the use of facial recognition technology continues to draw criticism from the public, Democratic presidential candidates (e.g., Andrew Yang, Bernie Sanders, and Elizabeth Warren) are starting to articulate how they’d handle the tech if elected.
We are not in the business of predicting presidential primaries, so who knows how this will play out.
However, this rhetoric, coupled with a recent California Senate bill that places a three-year moratorium on the use of facial recognition in police body cameras, gives us hope that the regulatory standstill will end.