THE FUTURE IS HERE

Artificial intelligence can lead to cognitive bias. How can a software test engineer avoid it?

When we train AI systems on human-generated data, the results can inherit human biases. Attend the webinar to learn more.

A must-attend webinar for software test engineers who want to learn about AI and software testing.

Webinar Date: 25 Feb 2019, 11am Pacific Time
******
URL: https://sqaweb.link/webinar678
******
We would like to think that AI-based machine learning systems always produce the right answer within their problem domain. In reality, their performance is a direct result of the data used to train them. The answers in production are only as good as that training data.

Data collected by humans, such as surveys, observations, or estimates, can carry built-in human biases. Even objective measurements can measure the wrong things or miss essential information about the problem domain.
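To make that concrete, here is a minimal sketch in Python of how biased labels propagate into a model. The "hiring screener" scenario, the numbers, and the feature names are invented for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
group = rng.integers(0, 2, n)           # a protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)         # the signal we actually care about
# Biased historical labels: equal skill, but group 1 was approved less often.
label = (skill - 1.0 * (group == 1) + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])     # the group attribute leaks into the features
model = LogisticRegression().fit(X, label)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"approval rate for group {g}: {rate:.2f}")
# The model reproduces the historical skew: group 1 is approved far less often,
# even though skill is distributed identically in both groups.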

The effects of biased data can be even harder to spot. AI systems often function as black boxes, meaning even the technologists who build them cannot see how a system arrived at its conclusion.

This can make it particularly hard to identify any inequality, bias, or discrimination feeding into a particular decision.

This webinar will explain:

1. How AI systems can suffer from the same biases as human experts
2. How those biases can lead to biased results
3. How testers, data scientists, and other stakeholders can develop test cases to recognise biases, both in the data and in the resulting system (a sketch of one such test follows this list)
4. Ways to address those biases
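As a taste of point 3, here is one way a tester might encode a bias check as an ordinary test case. This is a sketch, not an established API: the demographic-parity metric is standard, but the function names, threshold, and toy data are assumptions for illustration:

import numpy as np

def demographic_parity_gap(predictions, groups):
    # Largest difference in positive-prediction rate between any two groups.
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def test_predictions_treat_groups_similarly():
    # In a real suite these arrays would come from the model under test.
    predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    gap = demographic_parity_gap(predictions, groups)
    # This toy data fails the check (gap = 0.50), which is the point:
    # the test surfaces a skew a reviewer might otherwise never notice.
    assert gap <= 0.2, f"positive-prediction rates differ across groups by {gap:.2f}"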

Attendees will gain a deeper understanding of:

1. How data influences machine learning outcomes
2. How machine learning systems make decisions
3. How selecting the wrong data, or ambiguous data, can bias machine learning results
4. Why we don’t have insight into how machine learning systems make decisions
5. How we can identify and correct bias in machine learning systems (a correction sketch follows this list)
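As a preview of point 5, here is a correction sketch that continues the synthetic screener example from earlier. Dropping the protected attribute is the simplest mitigation and works in this toy setup; it is known to be insufficient in real data, where other features can act as proxies for the group:

from sklearn.linear_model import LogisticRegression

# Reuses X, group, and label from the earlier sketch.
X_blind = X[:, [0]]                     # keep skill only, drop the group column
blind_model = LogisticRegression().fit(X_blind, label)

for g in (0, 1):
    rate = blind_model.predict(X_blind[group == g]).mean()
    print(f"approval rate for group {g}: {rate:.2f}")
# Rates now converge, because skill is independent of group in this toy data.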

Speaker: Peter Varhol, Software Strategist & Evangelist