First, let’s ask: what is bias in artificial intelligence?
Biases find their way into the AI systems we design, and those systems are used to make decisions by many, from governments to businesses. Bad data used to train AI can contain implicit racial, gender, or ideological biases. Bias in AI systems can erode trust between humans and the machines that learn from us.
Now, what does ‘ethical use of AI’ really mean?
The report states that this is mostly about how businesses collect personal data, and whether they are overly reliant on machines when making crucial business decisions (especially in banking and insurance). A more ethical approach to the use of artificial intelligence could be achieved through more regulation and more transparency: people want to know when they’re being managed by an AI. To achieve this, organizations need to focus on putting the right governance structures in place. They must not only define a code of conduct based on their own values, but also implement it as an ‘ethics-by-design’ approach and, above all, focus on informing and empowering people in how they interact with AI solutions.
Typically,
bias seeps into the AI process in one of three ways: in design, data, or
selection.
· Creating the right design
Problems normally arise in the design process when an algorithm’s goals are not properly framed to guarantee fairness, since the parameters it optimizes can encourage bias. Companies can work towards eliminating bias by avoiding a framework that is too narrowly focused on a particular company goal and by building fairness into the algorithm itself.
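To make “building fairness into the algorithm itself” concrete, here is a minimal sketch (in Python, with invented numbers, group names, and weights) of an objective that adds a fairness penalty to the company goal, so a large gap in per-group selection rates makes an otherwise profitable outcome less attractive:

```python
def objective(profit, selection_rates, fairness_weight=10.0):
    """Company goal minus a fairness penalty (a demographic-parity gap).

    selection_rates: hypothetical per-group rates at which the algorithm
    selects people; the penalty grows with the gap between groups.
    """
    gap = max(selection_rates.values()) - min(selection_rates.values())
    return profit - fairness_weight * gap

# A design focused only on the company goal ignores the gap entirely...
print(objective(100.0, {"group_a": 0.75, "group_b": 0.25}, fairness_weight=0.0))  # 100.0
# ...while building fairness in makes the biased outcome score worse.
print(objective(100.0, {"group_a": 0.75, "group_b": 0.25}))  # 95.0
```

The design choice here is simply that fairness appears in the objective being optimized, rather than being checked after the fact.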
· Feeding in the right data
An AI model is trained by feeding it large amounts of data. But if those datasets under- or over-represent certain groups, or rely on out-of-date or skewed historical records or societal norms, then the outcomes will necessarily be biased. For example, if a machine is trained to identify the best college recruits based on the backgrounds of its current top students, good candidates outside those criteria could be excluded. Similarly, algorithms using historical hiring data to vet candidates could unfairly eliminate qualified candidates of a certain age.
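One simple, practical guard against this kind of data bias is auditing how groups are represented in a training set before using it. A minimal sketch, using a hypothetical hiring dataset in which older candidates are badly under-represented (the field names and proportions are invented):

```python
from collections import Counter

def audit_representation(records, group_key):
    """Return each group's share of the dataset.

    Under- or over-represented groups are a common source of data bias,
    so a lopsided result here is a warning sign before training.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical hiring records: 90 younger candidates, only 10 older ones.
records = ([{"age_band": "under_40"}] * 90) + ([{"age_band": "over_40"}] * 10)

shares = audit_representation(records, "age_band")
print(shares)  # {'under_40': 0.9, 'over_40': 0.1}
```

A model trained on this set would mostly learn the patterns of the majority group, which is exactly how the age-related exclusion described above can arise.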
· Bias in selection
Selection bias arises when choosing which attributes a model should weigh. In healthcare, for example, a system may look at weight, age, and medical history. Bias can easily infiltrate the selection process if companies put too much emphasis on certain attributes and on how they interact with other data fields.
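A toy illustration of how attribute selection and weighting can skew results: the same two hypothetical patients are ranked differently depending on which attributes the scoring function emphasizes (all attribute names, values, and weights here are invented):

```python
def risk_score(patient, weights):
    """Weighted sum over the selected attributes (a toy model, not a real one)."""
    return sum(weights[attr] * patient[attr] for attr in weights)

patient_a = {"weight_kg": 95, "age": 30, "prior_conditions": 0}
patient_b = {"weight_kg": 60, "age": 70, "prior_conditions": 3}

# Over-emphasizing one attribute (body weight) lets it dominate the score...
skewed = {"weight_kg": 1.0, "age": 0.1, "prior_conditions": 1.0}
# ...while a different selection of emphasis reverses the ranking entirely.
balanced = {"weight_kg": 0.1, "age": 0.5, "prior_conditions": 5.0}

print(risk_score(patient_a, skewed) > risk_score(patient_b, skewed))      # True
print(risk_score(patient_a, balanced) > risk_score(patient_b, balanced))  # False
```

The point is that the bias lives in the choice of weights, not in either patient’s data.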
Like any emerging field, this one attracts misconceptions. Four of the main misconceptions about ethics and bias in AI are:
1. Misconception: Engineers are only responsible for the code.
2. Misconception: Humans and computers are interchangeable.
3. Misconception: We can't regulate the tech industry.
4. Misconception: Tech is only about optimizing metrics.
Many questions come to mind when we talk about bias and artificial intelligence; some of them are addressed below:
Q: Are there any ethical implications
that businesses need to consider when introducing AI applications into their
businesses?
I think that what
businesses need to be mindful of is what data they feed into their AI
algorithms. As we’ve seen in several unfortunate examples, if you don’t train
your AI with a wide set of data it can significantly amplify bias in the
end-product. For example, if your facial recognition programme is only trained
on white men, then you’re going to see some unbalanced outcomes.
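One way to surface this kind of imbalance is to measure accuracy separately per demographic group rather than overall. A minimal sketch with invented results (group names, labels, and counts are all hypothetical):

```python
from collections import defaultdict

def accuracy_by_group(examples):
    """Per-group accuracy from (group, predicted, actual) triples.

    A large gap between groups suggests the model was trained on an
    unrepresentative dataset.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in examples:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical face-recognition results: the model was trained mostly on
# one demographic, so it performs far worse on the other.
results = (
    [("group_a", "match", "match")] * 9 + [("group_a", "no_match", "match")] * 1 +
    [("group_b", "match", "match")] * 5 + [("group_b", "no_match", "match")] * 5
)
print(accuracy_by_group(results))  # {'group_a': 0.9, 'group_b': 0.5}
```

An overall accuracy of 70% would hide the fact that one group is served far worse than the other, which is the “unbalanced outcome” described above.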
If businesses are going to implement these technologies, their leaders have a responsibility to
ensure that the algorithms they’re creating are reflective of the world at
large. This goes beyond technology and reflects the need for wider diversity
across the business, from developer teams right up to the business leadership
team.
Q: Do you think that AI should be
regulated? If so, what should this look like?
I think that when we
consider regulation, we should think of it through a lens of purpose and
intent. We need to ask ourselves whether the end-usage application of a
technology is “good” or not and then build an ethical framework out from there.
I see this as a more sustainable approach than regulating the development of
technology that could have a significantly good effect on society and the
progression of humanity.
Q: Democratization of AI is a question
that often comes up in discussions around the ethics of AI. Do you think it’s
possible for it to become a technology that benefits everyone?
For me, the answer lies in the strength of the educational curriculum and how well it prepares today’s learners for tomorrow’s work. That isn’t to say we should focus purely on STEM to the detriment of the liberal arts. On the contrary, to ensure that
the technology we create is used responsibly and for the good, we cannot lose
sight of the subjects that reflect our humanity. In the next era of
human-machine partnerships, at the same time as encouraging our children to
count and to read, we must also encourage a diversity in their thinking. That
means recognizing the importance of the arts, humanities, and social sciences
in nurturing creative, critical thinkers. Core skills like emotional
intelligence and moral reasoning are vital if we are to train out the bias and
single-minded thinking that exists in our industry and in our AI programmers.
For more details, you can visit:
https://securityboulevard.com/2020/02/the-challenge-of-bias-in-ai-creating-ethical-guidelines/