Let's look at whether artificial intelligence can complement rather than replace humans … internet-age validation … and Google's €500m fine in France
Every once in a while, a story comes along that encapsulates everything its field hopes and fears. This is one of them.
GitHub is a platform where developers collaborate internationally on coding projects with colleagues, friends, and strangers. The site hosts more source code than any other in the world, and, since being purchased by Microsoft in 2018, it has become a vital part of many organisations' digital infrastructure.
The company announced Copilot, an AI tool, late last month. Nat Friedman, chief executive, explained it as follows:
A new AI pair programmer that helps you write better code. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code – to help you complete your work faster.
Basically, Copilot sits on your computer and automates part of your coding chores. There is a long-running joke in the programming community that much of the job consists of finding someone online who has solved a similar problem and copying their answer into your program. Now you can automate that process with an artificial intelligence.
One of the most remarkable things about Copilot is how many common problems it can solve. Several programmers have said it is as striking as the first time text generated by GPT-3 appeared on the web. Remember, that's the super-powerful AI that generates text like:
The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
When imagining how technology will change our world, we often think of a future in which humans are essentially unnecessary. It is easy to picture AI systems that can do everything a person can do with ever-increasing competence, tackling ever more complex domains and leaving the humans who used to do those jobs with idle hands.
Whether such a scenario would be a nightmare or a utopia depends on your viewpoint. If AIs did the jobs of large numbers of people, would those people be freed up to live lives of leisure? Or would they be unemployed and unemployable, while their former managers reaped the benefits of the increased productivity per hour worked?
AI won't always replace us, though. Instead, fields are exploring how they can use the technology to complement humans, extend their capabilities, and take the drudge work out of their jobs so they can do what they do best.
One such concept is the "centaur": a hybrid worker with a human front half and an AI back half. It isn't as futuristic as it sounds: autocorrect on an iPhone is a centaur of sorts, offloading part of the laborious task of typing accurately on a tiny keyboard to an artificial intelligence.
Sometimes the centaur vision edges close to dystopia. Amazon's warehouse employees have been systematically pushed down this path in the name of eking out every ounce of efficiency. Through guidance, tracking, and assessment, the workers are made to always take the best route through the warehouse, pick the right items, and do so at a rate consistently high enough for the company to profit. Their job is still to do things only humans can do, but in this instance that amounts to little more than having working hands.
Centaurs, however, have already proven themselves in other fields. In the competitive world of chess, a special format has existed for years in which human players work together with chess computers. In general, such pairs play better than either would alone: the computer avoids stupid errors and presents a list of high-value options, while the human keeps the game fresh and allows for lateral thinking and unpredictability.
Copilot is GitHub's hope of bringing programming into this future. It lets programmers stop wasting their time on well-documented tasks, such as sending a valid request to Twitter or pulling the time in hours and minutes from a system clock, and start exploring areas where no one else has ventured before.
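To make the "well-documented task" concrete, here is a minimal sketch of the sort of boilerplate being described: pulling the time in hours and minutes from the system clock. It is an illustrative example written by hand, not output from Copilot, and the function name is my own invention.

```python
from datetime import datetime

def current_hours_minutes():
    """Return the system clock's current time as an (hours, minutes) tuple."""
    now = datetime.now()
    return now.hour, now.minute

h, m = current_hours_minutes()
print(f"{h:02d}:{m:02d}")  # e.g. "14:07"
```

Trivial as it is, a working programmer writes this kind of snippet so often that looking it up (or having an AI type it) is a genuine time-saver.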
What makes Copilot intriguing to me isn't only its potential, though. It's also that this single release manages to represent all four of the traps plaguing the industry.
GitHub's own platform served as the training data for Copilot. In other words, it was taught to code from the work uploaded by hundreds of millions of developers around the world.
That's fantastic when the material is the solution to a simple coding problem. It's rather less helpful when the material is, say, a secret key used to log in to a user account. Nevertheless:
The [Airbnb] link provided by GitHub Copilot has a key that still works (and stops working when it is changed).
The AI is still leaking functional, valid API keys for [Sendgrid].
Today, the vast majority of artificial intelligence isn't coded but taught: you tell it to look at an enormous amount of information and work out for itself the patterns within it. Because GitHub's repositories contain a lot of code, Copilot can learn what code looks like by studying examples. But those repositories also contain examples of what an API key mistakenly uploaded in public looks like – and Copilot can learn that too, and then share it onwards.
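The mechanism is easier to see with an example. Below is a hypothetical sketch of the kind of hardcoded secret that ends up in public repositories; the key is made up, and the regex is a simplified illustration of the pattern secret scanners look for, not SendGrid's official specification. A model trained on public code sees strings like this alongside ordinary configuration, which is exactly why it can memorise and regurgitate them.

```python
import re

# Hardcoded secret – bad practice, but common in public repositories.
# This key is fake and purely illustrative.
SENDGRID_API_KEY = "SG.fake_example_key_000000000000"

# Simplified pattern of the kind pre-commit secret scanners use to
# flag SendGrid-style keys (which begin with "SG.") before they are
# pushed to a public repo.
SENDGRID_PATTERN = re.compile(r"SG\.[A-Za-z0-9_\-\.]{20,}")

def looks_like_sendgrid_key(s: str) -> bool:
    """Return True if the string matches the SendGrid key shape."""
    return bool(SENDGRID_PATTERN.search(s))

print(looks_like_sendgrid_key(SENDGRID_API_KEY))  # → True
```

Scanners like this exist precisely because humans leak keys this way; a model trained on those same leaks has no scanner between what it learned and what it suggests.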
Leaked passwords and keys are the most alarming examples, but they point to the underlying concern about the technology: does the AI create things, or does it simply remix work already done by humans? And if the latter, are those humans entitled to a say in how their work is used?
“Across the machine learning community, training machine learning models on open data is considered fair use,” says GitHub in a FAQ.
That page originally made a much softer claim – that such practices were merely "commonplace" – and was updated after coders from around the world complained that GitHub was infringing their copyright. Interestingly, the biggest opposition came not from large private companies concerned about their work being reused, but from the open-source community: its developers often rely on the copyright conditions attached to their code, conditions that GitHub did not enforce against Copilot's reuse of it.
According to law professor James Grimmelmann, GitHub is probably right about the law. But there is a good chance the company will not be the last to introduce a groundbreaking new AI tool, only to face awkward questions about how it obtained the data used for training.