Technology, ethics and law: dealing with our future based on artificial intelligence

Digital technology advances every day. The vast computing power of the cloud has converged with an immense accumulation of data. Artificial intelligence (AI) is growing all around us, and computers will behave more and more like humans.

How will this global transformation affect individuals and society? And what laws should be enacted to guide progress in a digital world?

For Brad Smith, the president of Microsoft, it begins with a fundamental ethical question about "not only what computers can do, but also what they should do."

"That is the question that all communities and all countries of the world will need to ask themselves over the next decade," he told lawyers, policy makers and academics during a visit to Singapore, a city-state with an impressive track record of adoption of new technologies and innovation.

While developers, quite rightly, are excited about the innovative products they are creating, "we cannot afford to look to the future with uncritical eyes," he warned.

As computers begin to act more like humans, there will be social challenges. "Not only do we need a technological vision for AI, we also need an ethical vision for AI," he said.

These ethical problems should not be the focus of "engineers and technology companies" alone. "Indeed, they are for everyone," because a growing number of people and organizations are creating their own AI systems using the technological "building blocks" that companies like Microsoft produce.

Smith, who spoke at the TechLaw.Fest conference and at the Lee Kuan Yew School of Public Policy at the National University of Singapore, said that the Southeast Asian city-state "is a great place to see where the future and the spread of AI creation is and will be."

Earlier this year, Smith and Harry Shum, executive vice president of Microsoft AI and Research Group, co-authored The Future Computed: Artificial Intelligence and Its Role in Society. And during his visit to Singapore, Smith explored six key ethical principles to consider.

Fairness. "What does it mean to create technology that begins to make decisions like human beings? Will the computers be fair? Or will computers discriminate in a way that lawyers, governments and regulators consider illegal? ... If the data set is biased, then the AI system will be biased." Part of this is an urgent need to correct the current lack of diversity in today's male-dominated technology sector.
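To make that last point concrete, here is a minimal, hypothetical Python sketch (not from the talk or the book): a toy hiring model "trained" on historical records in which one group was favored simply reproduces that bias in its predictions. The data, group labels and function names are invented for illustration.

```python
# Minimal sketch (illustrative only): a model fit to historically biased
# hiring data automates the old bias rather than removing it.
from collections import defaultdict

# Hypothetical historical records: (group, was_hired). Group "A" was favored.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Train" the simplest possible model: the per-group hire rate.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired  # True counts as 1
    counts[group][1] += 1

def predict_hire_probability(group: str) -> float:
    hired, total = counts[group]
    return hired / total

print(predict_hire_probability("A"))  # 0.8 -- the historical bias, now automated
print(predict_hire_probability("B"))  # 0.3
```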

Reliability and safety. Most current product liability laws and regulations are based on the impact of technologies invented a century or more ago. These should evolve to address the rise of computers and AI. "People need to understand not only what AI can do, but also the limits of what AI can do." This will ensure that humans stay aware and that thorough testing is performed.


Privacy and security. "As we read about the problems in the news, and think about where AI and other information technologies are going, some problems matter more than others. We will have to start by applying the privacy laws that exist today, but then think about the gaps in these legal rules ... so that people can manage their data, so that we can design systems to protect against bad actors and ensure the responsible use of data."
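As one small illustration of designing systems that protect data (an assumption on my part, not a practice cited by Smith), the sketch below pseudonymizes user identifiers with a keyed hash before storage, so raw personal identifiers never need to appear in downstream datasets or logs.

```python
# Minimal sketch (an assumption, not a legal or regulatory standard):
# pseudonymize user identifiers with a keyed hash before storing them.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key, kept out of the dataset

def pseudonymize(user_id: str) -> str:
    # HMAC rather than a bare hash, so identifiers cannot be brute-forced
    # back to real people without access to the key.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("alice@example.com"))  # stable token, no personal data
```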
Inclusiveness. "Many people have a disability. AI-based systems will either improve their day or make it worse. It all depends on whether the people who design AI systems take their needs into account."

The four areas above rest on the final two principles:

Transparency. "There is a doctrine of 'explainability' that is rapidly emerging in the field of AI." In other words, the people who create AI systems have a responsibility to ensure that those who use or are affected by them know "how algorithms actually work. And that will be a very complicated issue."
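One simple, hypothetical form of explainability (an illustration only, not the emerging doctrine Smith describes) is a linear scoring model whose decision can be decomposed into per-feature contributions. The weights and feature names below are invented.

```python
# Minimal sketch (illustrative assumption): for a linear scoring model,
# each feature's contribution (weight * value) can be reported directly,
# which is one very simple way to explain "how the algorithm actually works".
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain(applicant: dict) -> None:
    contributions = {f: weights[f] * v for f, v in applicant.items()}
    score = sum(contributions.values())
    print(f"score = {score:.2f}")
    # List features from most to least influential on this decision.
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {c:+.2f}")

explain({"income": 3.0, "debt": 2.0, "years_employed": 4.0})
```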

Finally, there is the fundamental principle of accountability. "As we think about empowering computers to make more decisions, we must fundamentally ensure that computers remain accountable to people. And we must ensure that the people who design these computers and these systems remain accountable to other people and to the rest of society."

These six areas complement each other. But even so, "they don't do everything ..."
