What is AI?
Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving. AI can carry out tasks that would otherwise require human intelligence or intervention, either on its own or in combination with other technologies such as sensors, geolocation, and robotics. Familiar examples from the news and from daily life include digital assistants, GPS guidance, autonomous vehicles, and generative AI tools like OpenAI's ChatGPT.
Methods and goals in AI
Symbolic vs. connectionist approaches
The symbolic (or "top-down") approach and the connectionist (or "bottom-up") approach are two distinct, and sometimes competing, approaches to AI research. The top-down approach seeks to recreate intelligence by analyzing cognition in terms of symbol processing, which gives rise to the symbolic label, without regard to the biological makeup of the brain. The bottom-up approach, by contrast, involves building artificial neural networks that imitate the brain's structure, hence the connectionist label.
To see the difference, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically trains an artificial neural network by presenting it with letters one at a time, gradually "tuning" the network so that its performance improves. (Tuning adjusts how responsive different neural pathways are to different stimuli.) A top-down approach, on the other hand, typically involves writing a computer program that compares each scanned letter against geometric descriptions. Put simply, the bottom-up approach rests on neural activity, while the top-down approach rests on symbolic descriptions.
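To make the contrast concrete, here is a toy sketch (not from the original text) in which the same two letters are recognized first by hand-written geometric rules (top-down) and then by a tiny perceptron tuned on examples (bottom-up). The 5x5 bitmaps and the rules are invented purely for illustration.

```python
# Toy 5x5 bitmaps for the letters "L" and "T" (1 = dark pixel). Invented data.
LETTERS = {
    "L": [1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,1,1,1,1],
    "T": [1,1,1,1,1,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0],
}

# Top-down (symbolic): compare the scanned bitmap against hand-written
# geometric descriptions -- "an L has a full bottom row", "a T has a full top row".
def classify_symbolic(bitmap):
    bottom_row_full = all(bitmap[20:25])
    top_row_full = all(bitmap[0:5])
    if bottom_row_full and not top_row_full:
        return "L"
    if top_row_full and not bottom_row_full:
        return "T"
    return "unknown"

# Bottom-up (connectionist): show labelled examples to a tiny perceptron and
# nudge ("tune") its weights whenever it answers wrongly.
def train_perceptron(examples, epochs=20, lr=0.1):
    weights = [0.0] * 25
    bias = 0.0
    for _ in range(epochs):
        for bitmap, label in examples:            # label: +1 for "L", -1 for "T"
            activation = sum(w * x for w, x in zip(weights, bitmap)) + bias
            predicted = 1 if activation >= 0 else -1
            if predicted != label:                # adjust only on mistakes
                weights = [w + lr * label * x for w, x in zip(weights, bitmap)]
                bias += lr * label
    return weights, bias

def classify_perceptron(bitmap, weights, bias):
    activation = sum(w * x for w, x in zip(weights, bitmap)) + bias
    return "L" if activation >= 0 else "T"

examples = [(LETTERS["L"], 1), (LETTERS["T"], -1)]
weights, bias = train_perceptron(examples)

for name, bitmap in LETTERS.items():
    print(name, classify_symbolic(bitmap), classify_perceptron(bitmap, weights, bias))
```

The symbolic classifier needs someone to write down the geometric rules in advance; the perceptron arrives at its own weights simply by being shown examples.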
AI technology
Early in the twenty-first century, faster processing power and larger datasets ("big data") carried artificial intelligence out of computer science departments and into the wider world. Moore's law, the observation that computing power doubled roughly every 18 months, continued to hold. The stock responses of the original chatbot ELIZA fit comfortably within 50 kilobytes; the language model at the heart of ChatGPT was trained on 45 terabytes of text.
I) Machine learning
With the development of the "greedy layer-wise pretraining" technique in 2006, it became possible to train neural networks with more layers and thus to tackle more complex problems. The technique rests on the observation that training the individual layers of a network one at a time is easier than training the whole network from input to output. This advance enabled the form of machine learning known as "deep learning," in which neural networks have four or more layers, counting the initial input and the final output.
Such networks are also capable of unsupervised learning, that is, of discovering features in data without prior instruction.
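As a rough illustration, the following sketch, assuming PyTorch is installed, defines a network that is "deep" in the sense above: four layers counting the input and output, with the layer sizes chosen arbitrarily.

```python
# A minimal sketch of a deep network: input layer, two hidden layers, output
# layer -- four layers in the "including input and output" sense used above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),   # input layer: e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(128, 64),    # hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),     # output layer: scores for 10 classes
)

x = torch.randn(1, 784)    # one random input, just to show a forward pass
print(model(x).shape)      # torch.Size([1, 10])
```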
Machine learning has many uses beyond games and image classification. The pharmaceutical giant Pfizer used it to search quickly through millions of candidate molecules in developing Paxlovid, a treatment for COVID-19. Google uses machine learning to filter spam out of Gmail users' inboxes. Banks and credit card companies train models on historical data to identify fraudulent transactions.
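As a hedged illustration of the fraud-detection use case, the sketch below trains a classifier on a handful of synthetic, made-up transactions using scikit-learn; real systems use far larger datasets and much richer features.

```python
# A minimal sketch: fit a classifier on labelled historical transactions and
# score a new one. Assumes scikit-learn is installed; all data is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [amount, hour_of_day, is_foreign]; label 1 = known fraud.
X_history = [
    [12.50,   9, 0],
    [45.00,  14, 0],
    [980.00,  3, 1],
    [23.99,  19, 0],
    [1500.0,  2, 1],
    [8.75,   11, 0],
]
y_history = [0, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_history, y_history)

new_transaction = [[1200.0, 4, 1]]      # large, late-night, foreign
print(model.predict(new_transaction))   # e.g. [1] -> flagged as likely fraud
```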
II) Large language models and natural language processing
Natural language processing, or NLP, is concerned with how computers can process and analyze language much as humans do. To accomplish this, NLP models draw on machine learning, deep learning, statistics, and computational linguistics.
Early NLP models were hand-coded and rule-based and could not account for the exceptions and subtleties of language. The next stage was statistical NLP, which uses probability to estimate how likely a given interpretation of a piece of text is. Modern NLP systems use deep learning models and methods that allow them to "learn" as they process information.
Notable examples of contemporary NLP are language models, which use AI and statistics to predict how a sentence will continue from the words already present. The "large" in large language model (LLM) refers to the number of parameters, the variables and weights the model uses to shape its prediction output.
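The following toy sketch captures the core idea of a language model, predicting the next word from what precedes it, using simple bigram counts rather than the billions of learned parameters of an LLM. The training text is an invented placeholder.

```python
# A minimal next-word predictor built from bigram statistics.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word given the preceding word."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- the most frequent continuation of "the"
print(predict_next("sat"))   # "on"
```

An LLM does the same job at vastly greater scale, learning its parameters from enormous corpora instead of tallying raw counts.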
III) Autonomous vehicles
To make autonomous vehicles safe and efficient, artificial simulations are built to evaluate their capabilities. Such simulations are generated using black-box testing rather than white-box validation. In white-box testing the tester knows the internal workings of the system under test, so it can be used to demonstrate the absence of failure.
Black-box techniques are far more involved and require a more adversarial strategy. With these techniques, the tester works only from the system's external behavior and interfaces, not its internal design, probing for flaws to make sure the system meets strict safety requirements.
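The sketch below illustrates the black-box idea under stated assumptions: a hypothetical braking controller stands in for the opaque system under test, and the test only generates scenarios and checks a safety property on the output, never inspecting the controller's internals.

```python
# A hypothetical black-box scenario test. Both functions below are invented
# stand-ins; only their inputs and outputs matter to the test loop.
import random

def braking_controller(speed_mps, gap_m):
    """Stand-in for the opaque system under test: returns brake force 0..1."""
    time_to_collision = gap_m / max(speed_mps, 0.1)
    return 1.0 if time_to_collision < 2.0 else min(1.0, 2.0 / time_to_collision)

def stopping_distance(speed_mps, brake_force):
    """Very rough physics placeholder: harder braking stops sooner."""
    max_decel = 8.0 * max(brake_force, 0.05)          # m/s^2
    return speed_mps ** 2 / (2 * max_decel)

# Adversarial loop: throw thousands of random scenarios at the controller and
# record every case where the car would not stop within the available gap.
random.seed(0)
failures = []
for _ in range(10_000):
    speed = random.uniform(5, 40)        # m/s
    gap = random.uniform(5, 120)         # metres to the obstacle
    force = braking_controller(speed, gap)
    if stopping_distance(speed, force) > gap:
        failures.append((round(speed, 1), round(gap, 1)))

print(f"{len(failures)} violating scenarios found")
```

The test never looks inside `braking_controller`; it simply searches the input space for violations of the safety requirement, which is the essence of the adversarial black-box approach.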
IV) Virtual assistants
Virtual assistants, or VAs, help users with a range of tasks, such as scheduling, making and receiving calls, and giving directions while traveling. These systems require large amounts of data and learn from user input to get better at anticipating user needs and behavior.
The best-known virtual assistants today are Apple's Siri, Google Assistant, and Amazon Alexa. Virtual assistants are more personalized than chatbots and conversational agents because they adapt to each user's individual behavior and learn from it to improve over time.
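As a loose, hypothetical illustration of learning from user input, the sketch below logs which request a user makes at which hour and then anticipates the most frequent one; real assistants rely on far richer data and models.

```python
# A toy "anticipation" loop: remember past requests per hour, suggest the most
# common one. The simulated history is invented for illustration.
from collections import Counter, defaultdict

requests_by_hour = defaultdict(Counter)

def log_request(hour, request):
    requests_by_hour[hour][request] += 1

def anticipate(hour):
    history = requests_by_hour.get(hour)
    return history.most_common(1)[0][0] if history else None

# Simulated history: most weekday mornings the user asks about the commute.
for _ in range(5):
    log_request(8, "traffic on commute")
log_request(8, "weather")

print(anticipate(8))   # "traffic on commute"
```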
The 4 Types of AI
As researchers work to create increasingly sophisticated artificial intelligence systems, they need to develop more refined definitions of intelligence and, perhaps, consciousness. To help clarify these ideas, researchers have identified four categories of artificial intelligence.
1. Reactive machines
Reactive machines are the most basic form of artificial intelligence. Such machines can only "react" to the situation in front of them; they have no awareness of past events. As a result, they can perform certain advanced tasks within a very narrow scope, such as playing chess, but cannot accomplish anything outside that restricted context.
2. Limited memory machines
Limited memory machines have only a restricted knowledge of the past. They can engage with their surroundings more than reactive machines can: self-driving cars, for example, use a form of limited memory when turning, watching oncoming traffic, and adjusting their speed. Because their recollection of past events is confined to a short window of time, however, limited memory machines cannot build a comprehensive picture of the world.
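A minimal sketch of the "limited memory" idea, under invented numbers: only a short rolling window of recent observations is retained, and the decision is based on that window alone.

```python
# Keep only the last few speed readings of a car ahead; older ones are forgotten.
from collections import deque

recent_speeds = deque(maxlen=5)     # anything older than 5 readings is dropped

for observed_speed in [30, 31, 29, 28, 27, 26, 25]:
    recent_speeds.append(observed_speed)

# The decision uses only the retained window -- the first readings are gone.
average_recent = sum(recent_speeds) / len(recent_speeds)
print(list(recent_speeds))          # [29, 28, 27, 26, 25]
print("slow down" if average_recent < 28 else "hold speed")
```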
3. Theory of mind machines
"Theory of mind" machines are an example of early artificial general intelligence. These kinds of machines would be able to comprehend other living things in the world in addition to being able to produce representations of it. This reality has yet to come to pass as of right now.
4. Self-aware machines
Machines that are aware of themselves and of the world around them represent the most advanced form of artificial intelligence that exists in theory. This is what most people mean when they talk about reaching AGI. For now, it remains a distant prospect.
Conclusion
The field's achievements have brought it to a turning point, and it is now critical to consider the drawbacks and dangers that the widespread use of AI is exposing. As the ability to automate decisions at scale grows, people may be misled, discriminated against, or even physically harmed, whether by deliberate deepfakes or simply by unaccountable algorithms producing mission-critical recommendations.
Algorithms trained on past data are likely to reinforce, and even amplify, existing biases and inequalities. Although AI has traditionally been the domain of computer scientists and of scholars studying cognitive processes, it has become clear that every field of human inquiry, and the social sciences in particular, must take part in a wider discussion about the field's future.