

Can AI be Used for Social Good? Q&A with Michael Tjalve: Beyond the Classroom Series

Date: September 19, 2018

Why study at GIX?

Whether it’s by developing high-impact projects in collaboration with GIX industry and nonprofit partners or hearing exclusive talks from successful CEOs of companies and nonprofits like Accolade and PATH, the value of the GIX graduate student experience goes beyond the four walls of the classroom.

In our Beyond the Classroom blog series, we’re showcasing the wide range of networking, learning, and development opportunities that exist for GIX MSTI and Dual Degree students.

Introducing Michael Tjalve

Over the summer, we invited some of our industry mentors to host exclusive weekly seminars for GIX students focused on career development, the future of work, and the evolution of rapidly growing fields and industries that are incorporating emerging technologies. Michael Tjalve, GIX mentor, Principal AI Architect for Microsoft, and UW Affiliate Assistant Professor in the Linguistics Department, was one of the four mentors who presented to students in the Career Development series.

Here are his perspectives on the evolution of AI, including his response to the question no one can get a straight answer to – is AI truly the biggest threat of the future? Or will it be leveraged for social good to have a positive global impact?

To start, how would you define AI?

Simply put, AI is a set of computational components that are able to perform tasks which we commonly associate with human intelligence. That includes speaking, seeing, and hearing. One key aspect of AI systems is that they learn over time – absorbing and processing new signals, new data, and new input every day – just like humans do.

Are there industries and fields where AI is already in use that most people don’t know about?

Yes, AI is becoming an integrated part of our lives to the extent that we often don’t notice when we’re interacting with AI systems. Using search engines and spell checking are two good examples. Recommendations services – like suggested music on Spotify or movies on Netflix – are also a fairly popular application of AI.

With the rise of virtual assistants like Siri, Alexa, and Cortana, we’re seeing a new category of interactions with AI. Within individual domains, AI-powered experiences can be very convincing so it’s important to make it clear to the user when they’re interacting with an AI system.

Does that mean we’re already at the point where we can’t tell the difference between a human and an AI system?

That’s right, but only for narrowly limited domain applications and not consistently so. The notion of “Narrow AI” refers to an AI system that operates within the specific application, domain, and parameters it’s been created for. For several Narrow AI systems, we have now reached human parity. Speech recognition, machine translation, and the ability to read text and then intelligently answer or ask questions about that text are all examples of areas where AI has reached human parity.

There’s also research in Broad AI, also known as Artificial General Intelligence or AGI. AGI is the ability for an AI system to learn more broadly across domains, similar to how humans learn. There’s an ongoing debate in the AI community about whether we’ll ever see full AGI. I personally don’t believe we have any evidence to suggest that it’ll happen any time soon.

What’s the most exciting way AI technology has been applied so far to a product that you’ve seen in the market or that is in development?

There are many, but I would say the most inspiring applications of AI I’ve experienced are within the field of AI for social good. There are many different areas within that field, but I’m particularly fascinated by the application of AI in the context of education and learning. With the recent advances in AI, we can now design learning experiences that were impossible just a few years ago.

The user experience is evolving, and the modes of interaction are changing, allowing users to interact with content and technology in new ways. For learners, there’s potential for a lot of positive outcomes.

When learners can select their preferred mode of interaction with course content, they’re more likely to experience a boost in both motivation and retention. Leveraging speech and language as an interface is an enabling component in achieving this.

We’re currently working on a learning companion for refugees to improve access to educational resources. The companion is built on conversational AI with a chatbot interface, and we’re partnering with nonprofits and co-creating with displaced youth to make sure what we build provides real value for them.

I believe AI in the context of education and learning is going to be a key driver across many different segments of societies all around the world.

What do you think is the most intriguing problem that AI might be able to solve in the future?

That’s a good question. In general, any major challenge that has generated a lot of data is a good candidate for leveraging AI as part of the solution – whether that’s by extracting insights, training predictive models, or identifying new ways of thinking about a challenge.

Let’s take a look at two examples.

The AI for Earth program launched last year at Microsoft. The program strives to provide AI tools and resources to help organizations solve global environmental challenges. Whether it’s through gaining insights related to the causes and consequences of climate change or building solutions to address the loss of biodiversity, there’s no shortage of global challenges that can leverage AI assets.

Another example is AI-assisted healthcare. AI-powered systems have already started to be used for both diagnosis and prevention, and they’re able to carry out diagnoses with a surprising degree of precision. By automating parts of the analysis and diagnostic process, these systems allow doctors to focus on validating and adjusting the AI output so they can spend more face-to-face time with their patients.

There’s a lot of fear surrounding the capabilities of AI, in particular, its ability to outsmart humans and “take over”. Do you feel as though these fears are grounded and if so, how do we exercise caution in the application of AI?

AI has tremendous potential for positive impact, but we can’t just sit and hope that it’ll happen organically. There are valid concerns about the application and influence of AI.

It’s worth keeping in mind that AI doesn’t have any opinions. The notion of “evil AI” is fueled more by Hollywood than by reality. What’s important is the values we impose on the AI systems when we create them. It’s a principle called value alignment. Value alignment is the process of designing and building AI systems whose actions are ethical, and aligned with human values and goals.

How you define ethical actions and human values is obviously open for some interpretation when you look just one layer below common sense definitions. That’s why I believe we need to have an open and ongoing debate across all stakeholders, including AI providers, academia, lawmakers, and end users.

I’m less concerned about AI systems being deliberately designed to work against human values, although that can happen. The more relevant concern, in my opinion, relates to the unexpected consequences of AI. Lots of work, across several dimensions, needs to be done here. For example, we need to actively identify and counter bias in the data used for model training. Likewise, it’s important to continually anticipate and evaluate the impact of AI across the ecosystem in which it’s used – including the way it ripples throughout society.

Let’s look at the job market as an example. AI’s ability to automate predictable and repeatable processes is one of its greatest strengths. Combined with the fact that AI doesn’t get tired and doesn’t take a sick day, AI-powered solutions are already impacting the workforce. Some jobs that are easily automatable are at risk of being replaced by AI, others will evolve and change, and new jobs that didn’t previously exist will be created. I predict the most common impact will involve AI handling individual tasks within our current jobs, rather than replacing our jobs altogether.

Since the nature of the job market will shift, future professionals will need to learn new skills to remain competitive and ensure employability. This means access to affordable reskilling and continuous learning opportunities will be critical. I believe AI will play an important role in enabling this.

To sum up, what are some of the limitations of AI? And how can we design a future where AI technology and humans collaborate in a way that’s positive and exciting, and that’s in the best interests of the broader society?

AI is nothing without data, and herein lies the most significant limitation of AI. If we don’t have enough data or if we don’t have the appropriate kind of data, we end up with AI models that make unreliable predictions.

The most beneficial outcomes of AI will come from leveraging AI to augment human creativity. AI and humans can collaborate and complement one another if that collaboration is grounded in the strength of our differences. Humans are better at problem resolution, empathy, collaboration, critical thinking, and creativity – all areas where AI falls short.

There’s an important, ongoing effort to democratize AI by making tools and resources available to a wider audience. This will help put conditions in place for a more equitable future – a future where people across society can reap the benefits of AI.

I’m excited about what’s ahead. As long as we’re clear-eyed and thinking about the broad impact when meeting challenges head on, I believe that AI is poised to play an important role in our future. From battling climate change to developing novel advances in medicine and education, AI will provide value across society and across the world.

About Michael Tjalve

Michael Tjalve has worked in the field of AI for almost two decades. He currently works as a Principal AI Architect at Microsoft and he’s an Assistant Professor at the University of Washington where he teaches conversational AI. He’s also a mentor at GIX where he works with students on career development and research collaboration in the area of AI. He is passionately engaged in several AI for social good initiatives.


Want to learn more about GIX graduate programs?

If you’re interested in finding out whether you would be a good candidate for our MSTI or Dual Degree programs, sign up for one of our upcoming information sessions, offered either in person or online.

Interested in becoming a mentor?

If you’re someone willing and excited to help educate the future innovation leaders of our global society, please reach out! Even with our rapidly growing network of seasoned entrepreneurs and experts in their field, we’re constantly looking for more professionals passionate about sharing their expertise with the next generation of innovators. Learn more here.