
Artificial Intelligence

Artificial intelligence (AI) is a branch of computer science and engineering that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, answering diagnostic and consumer questions, and handwriting, speech, and facial recognition.
As such, it has become an engineering discipline focused on providing solutions to real-life problems, with applications in software, traditional strategy games such as computer chess, and other video games.

AI divides roughly into two schools of thought: Conventional AI and Computational Intelligence (CI), also sometimes referred to as Synthetic Intelligence to highlight the differences.

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. It is also known as symbolic AI, logical AI, neat AI, and Good Old-Fashioned Artificial Intelligence (GOFAI).

Methods include:
1. Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on it.
2. Case-based reasoning
3. Bayesian networks
4. Behavior-based AI: a modular method of building AI systems by hand.
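The Bayesian-network idea in item 3 can be sketched in miniature with Bayes' rule, the inference step such networks chain together. This is an illustrative toy, not a full network; the prior and likelihood numbers below are invented.

```python
# A hedged sketch of Bayesian inference: update belief in a hypothesis
# (a machine fault) after observing evidence (a sensor alarm).

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Invented numbers: P(fault) = 0.01; the alarm fires 95% of the time
# under a fault and 2% of the time otherwise.
p = posterior(0.01, 0.95, 0.02)
print(round(p, 3))  # 0.324: the alarm raises suspicion but does not confirm the fault
```

Even with a reliable alarm, the low prior keeps the posterior modest, which is exactly the kind of calibrated conclusion expert systems built on Bayesian networks aim for.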

Computational Intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI, and soft computing. Methods mainly include:
1) Neural networks: systems with very strong pattern recognition capabilities.
2) Fuzzy systems: techniques for reasoning under uncertainty that have been widely used in modern industrial and consumer product control systems.
3) Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms).
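The connectionist learning mentioned in item 1 can be shown at its smallest scale: a single perceptron whose weights are tuned iteratively from empirical data, exactly the "parameter tuning" flavor of Computational Intelligence. The AND-gate training set is a standard toy example, not from the text.

```python
# Minimal perceptron: adjust weights toward examples until the step
# unit reproduces the labels of a linearly separable training set.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches each label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 otherwise
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate
w, b = train_perceptron(samples)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in samples]
print(preds)  # [0, 0, 0, 1]: the AND labels, learned rather than programmed
```

No rule for AND was ever written down; the behavior emerged from data, which is the defining contrast with the symbolic methods of Conventional AI above.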

In hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI.


History of Artificial Intelligence

Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Wilhelm Schickard created the first mechanical digital calculating machine in 1623, followed by machines of Blaise Pascal (1643) and Gottfried Wilhelm von Leibniz (1671), who also invented the binary system. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.

Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910-1913, which revolutionized formal logic. In 1931, Kurt Gödel showed that sufficiently powerful consistent formal systems contain true theorems unprovable by any theorem-proving AI that systematically derives all possible theorems from the axioms. In 1941, Konrad Zuse built the first working program-controlled computer. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), laying the foundations for neural networks. Norbert Wiener's Cybernetics or Control and Communication in the Animal and the Machine (MIT Press, 1948) popularized the term "cybernetics".


1950s:

The 1950s were a period of active effort in AI. In 1950, Alan Turing introduced the "Turing test" as a way of operationalizing intelligent behavior. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956. He also invented the Lisp programming language. A decade later, Joseph Weizenbaum built ELIZA (1966), a chatterbot simulating a Rogerian psychotherapist. The birthdate of AI is generally considered to be July 1956 at the Dartmouth Conference, where many of these people met and exchanged ideas.

At the same time, John von Neumann, who had been hired by the RAND Corporation, developed game theory, which would prove invaluable in the progress of AI research.

1960s-1970s

During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators" in 1963, which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of Rosenblatt's simple perceptrons. Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limits of simple neural nets. Alain Colmerauer developed the Prolog computer language. Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to autonomously negotiate cluttered obstacle courses.

1990s & Turn of the Century:

During the 1990s and 2000s, AI became heavily influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between "neat" and "scruffy" approaches. The last few years have also seen strong interest in applying game theory to AI decision making. This new school of AI is sometimes called "machine learning". After the September 11, 2001 attacks there has been much renewed interest and funding for threat-detection AI systems, including machine vision research and data mining. However, despite the hype, excitement about Bayesian AI is perhaps now fading again, as successful Bayesian models have appeared only for small statistical tasks (such as finding principal components probabilistically) and appear to be intractable for general perception and decision making.


Artificial Intelligence in Manufacturing
As the manufacturing industry becomes increasingly competitive, sophisticated technology has emerged to improve productivity. Artificial Intelligence in manufacturing can be applied to a variety of systems. It can recognize patterns and perform time-consuming, mentally challenging tasks. Artificial Intelligence can optimize your production schedule and production runs.

Advantages

• View your best product runs and the corresponding settings.
• Increase efficiency and quality by using optimal settings from past production.
• Artificial Intelligence can optimize your schedule beyond normal human capabilities.
• Increase productivity by eliminating downtime due to unpredictable changes in the schedule.


The Tuppas Difference

At Tuppas our focus is on continuous innovation. We provide your team with the ability to rapidly out-innovate your competitors by providing tools that can easily and affordably respond to your changing expectations.

AI in Production Scheduling

Our artificial intelligence software for scheduling is based on genetic scheduling algorithms, which translate your scheduling goals into ordered tasks based on their importance. Tuppas Artificial Intelligence for Scheduling is designed to optimize your schedule based on your requirements. We design the software to recognize various levels of priority based on numerical associations. For example, if you are more concerned with a product or project due date than machine efficiency, but you want the system to optimize both, we would program your system to give higher priority to the due date but still optimize for machine efficiency. Another example of the AI software's capability is when unplanned jobs need to be added to the schedule or an existing job changes priority; the system immediately reorganizes the entire schedule to include the new information while meeting your requirements and priorities.
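The weighted-priority idea described above can be sketched very simply: give each criterion a numerical weight, score every job, and let the schedule be the jobs sorted by score; adding a rush job and re-sorting models the reorganization step. This is a toy ordering under invented weights and job data, not the Tuppas genetic algorithm itself.

```python
# Toy priority-weighted scheduler: due date outranks machine efficiency
# (weights 0.7 vs 0.3, invented for illustration).

WEIGHTS = {"due_urgency": 0.7, "machine_efficiency": 0.3}

def score(job):
    return sum(WEIGHTS[k] * job[k] for k in WEIGHTS)

def schedule(jobs):
    """The schedule is simply the jobs in descending score order."""
    return sorted(jobs, key=score, reverse=True)

jobs = [
    {"name": "A", "due_urgency": 0.2, "machine_efficiency": 0.9},
    {"name": "B", "due_urgency": 0.9, "machine_efficiency": 0.4},
]
print([j["name"] for j in schedule(jobs)])  # ['B', 'A']: B's due date dominates

# An unplanned job arrives: re-running schedule() reorganizes everything.
jobs.append({"name": "RUSH", "due_urgency": 1.0, "machine_efficiency": 0.5})
print([j["name"] for j in schedule(jobs)])  # ['RUSH', 'B', 'A']
```

A real genetic scheduler searches over orderings with mutation and crossover instead of a single weighted sort, but the encoding of goals as weighted numerical priorities is the same.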

AI in Closed Loop Production Optimization

Artificial Intelligence software for closed loop production optimization compares your goals to actual production runs. We have designed algorithms that analyze which of your past runs come closest to meeting your goals for the current production run, then present you with the best process settings for the current job. Our AI software presents a machine setting "recipe" to your staff, which they can use to create the best results. This allows your production staff to execute progressively more efficient runs by leveraging information collected from past production runs.
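The closed-loop retrieval step above can be sketched as a nearest-neighbor lookup: find the past run whose measured outcomes lie closest to the current goals and reuse its settings as the "recipe". Field names and numbers are invented for illustration.

```python
# Toy closed-loop optimizer: pick the past run whose outcome is nearest
# (Euclidean distance) to the goals, and return its machine settings.
import math

past_runs = [
    {"settings": {"temp": 180, "speed": 40}, "outcome": {"yield": 0.90, "scrap": 0.08}},
    {"settings": {"temp": 195, "speed": 35}, "outcome": {"yield": 0.96, "scrap": 0.03}},
    {"settings": {"temp": 170, "speed": 50}, "outcome": {"yield": 0.85, "scrap": 0.12}},
]

def distance(outcome, goals):
    return math.sqrt(sum((outcome[k] - goals[k]) ** 2 for k in goals))

def best_recipe(goals):
    """The 'recipe': settings of the run whose outcome best matched the goals."""
    closest = min(past_runs, key=lambda run: distance(run["outcome"], goals))
    return closest["settings"]

print(best_recipe({"yield": 1.0, "scrap": 0.0}))  # {'temp': 195, 'speed': 35}
```

Each completed run is appended to `past_runs`, so the recommended recipe improves as production history accumulates, which is what makes the loop "closed".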


About Tuppas Software

Tuppas has designed a programming framework on top of Microsoft's .Net platform which allows us to offer you the following advantages over other manufacturing software vendors:

• We design each module you purchase specifically for you.
Your employees won't find confusing fields or functionality you don't need in your software. We configure our modules to be a perfect fit for your situation.

• It's easy to modify.
Whether we make the modifications or show you how to do it yourself, it's no longer a hassle to change the way your manufacturing software works. Add fields, change report layouts and more, whenever you like.

• It's scalable.
Our software can grow and change as your business needs change. Since it's browser based, it can easily be extended to multiple locations. It also means that updates and changes to a multi-location enterprise's software are quick and easy.



Artificial Intelligence Benefits
• Reduced IT software support requirement
• Reduced hardware and servers
• Intuitive system interfaces
• Reduction in software training
• Cost of future innovation is dramatically decreased
• Ability to respond to new opportunities increases


INTERNATIONAL INNOVATION IN ARTIFICIAL INTELLIGENCE

Formation - a system to lay out all British Telecom Yellow Pages directories and to create a new business area for Pindar Set Ltd for responsive marketing support and catalogue layout systems. Winner of an award for innovative applications of AI in 1998.
Expert Provisioner - a system for the RAF Logistics to assist with procurement of spare parts. Saves £30m per annum for the RAF and is now being deployed to the British Army and Navy.

EASE - system deployed throughout Europe to estimate occupational exposure to hazardous substances for health and safety regulations.

Fraud Detection - Case-Based Reasoning has been applied to screen applications for financial products with MCL Software.

Ghostwriter - multilingual support for aero-engine maintenance procedures with British Aerospace and Dassault Aviation (France).

O-Plan and I-X - command, planning and control agents to support non-combatant evacuation operations, US Army military operations in urban terrain, multinational coalition operations, disaster relief, search & rescue, etc.

Optimum-AIV - planning system for assembly, integration and test of Ariane IV payloads for the European Space Agency.

EUMETSAT - specification of the telecommand system for the European Meteorological Spacecraft Control Centre.

International Standards - inputs to the development of standards for process specification, workflow, enterprise modelling, etc.






AI Activities in India

Work in Artificial Intelligence began in India in the early 1980s; the Management Information Systems (MIS) Group at the Indian Institute of Management, Calcutta has been actively involved in AI research since then. AI work in India got a significant boost with the UNDP-funded Knowledge Based Computing Systems (KBCS) project. This project started in November 1986 with a view to building institutional infrastructure, keeping abreast of state-of-the-art technology, training scientific manpower, and undertaking R&D in certain socio-economic areas that are amenable to this technology.

The major centers of the KBCS project were:
• Tata Institute of Fundamental Research, Mumbai
• National Centre for Software Technology (now Centre for Development of Advanced Computing), Mumbai
• Centre for Development of Advanced Computing, Pune
• Indian Statistical Institute, Kolkata
• Indian Institute of Technology, Chennai
• Indian Institute of Science, Bangalore
• Department of Electronics (now Department of Information Technology), Govt of India

The project has been highly successful in spreading research and application of different techniques of Artificial Intelligence not only to most universities and research institutes in India but also across large sections of India's very successful software industry.

Academic Institutions

Indian Institute of Technology, Delhi
At IIT Delhi, three departments - Computer Science and Engineering, Electrical Engineering, and Mathematics - are engaged in Artificial Intelligence related research. The focus areas are Computer vision, Natural language processing, Planning, Agent-based computing, Case-based reasoning, Information retrieval, Neural networks and Knowledge-based systems.

Indian Institute of Technology, Chennai
The Department of Computer Science & Engineering at IIT Chennai has an Artificial Intelligence Laboratory. Research here focuses on Case based reasoning, Data integration, Planning, Speech processing, and Language & image processing with artificial neural networks and hidden Markov models.

Indian Institute of Technology, Kanpur
The work in Artificial Intelligence at IIT Kanpur spans Machine learning, Information extraction, Machine translation, Speech recognition, Computer vision and Robotics.

Indian Institute of Technology, Mumbai

The AI research areas at IIT Mumbai are Natural language processing, Knowledge processing and applications, Robotics, Machine learning, Computer vision and Intelligent control systems.
Indian Institute of Technology, Kharagpur
IIT Kharagpur has research interests in Language technologies, Information retrieval, and personalized tutoring. Work here includes development of computer interfaces for visually challenged and non-literate and Braille for Indian languages.

Indian Institute of Management, Kolkata
The focus areas at IIM Kolkata are AI search methods, game playing, combinatorial optimization, soft computing and artificial neural networks, constraint satisfaction problems, very large scale integration (VLSI), and (Internet) auctions.

Indian Statistical Institute, Kolkata
At ISI Kolkata, the focus areas of AI research are Computer vision, Image processing, Pattern recognition, and Expert Systems.

Tata Institute of Fundamental Research, Mumbai
Indian language (Hindi) processing, Speech recognition, Speech synthesis, and Speech vocabulary are the main AI research areas at TIFR Mumbai.

National Centre for Software Technology, Mumbai
The Knowledge Based Computer Systems (KBCS) division of NCST (now Centre for Development of Advanced Computing), Mumbai carries out research activities in Artificial Intelligence. The areas of interest are: Natural language processing (including machine translation from English to Hindi, cross lingual information retrieval, and NL interfaces), Planning and scheduling (including transportation, vehicle scheduling and timetabling using heuristic search and perturbation models), Expert systems, Data mining, Neural network model for estimation of power consumption, and Intelligent tutoring systems including personalized instruction. NCST publishes an AI journal Vivek.

International Institute of Information Technology, Hyderabad
The focus areas of AI research at IIIT Hyderabad are Machine translation, Natural interfaces, and Knowledge extraction and management.



Central Electronics Engineering Research Institute (CEERI), Pilani
AI related work at CEERI Pilani includes Fuzzy logic and Neural network control & modeling application in Agri-based products and power electronics, Adaptive neuro fuzzy techniques for intelligent sensor development under MEMS programme, Data mining techniques and applications in environmental monitoring & control, Speech Recognition, and Speech data collection.


Indian Institute of Science, Bangalore
IISc is one of India's premier centers of technical education and academic research. The Department of Computer Science and Automation (CSA) and EE department are actively engaged in AI & intelligent systems activities including Data mining, neural networks, and intelligent information retrieval.


University of Hyderabad
AI research interests at University of Hyderabad include Neural networks, Cognitive modeling, Speech technologies, Expert systems, Natural language processing and language engineering, Image processing and pattern recognition, and Temporal and evidential reasoning. The University of Hyderabad also acts as the resource center for Telugu language processing.


The various industries involved in AI related activities in India are as follows:

TCS - TRDDC, Pune: Case-based reasoning, rule-based reasoning, natural language processing, information retrieval, pattern discovery, machine learning, neural networks

Systemantics, Bangalore: Robotics (mechanics based)

DRDO, several places in India: Robotics (defense)

CAIR, Bangalore: Artificial intelligence and robotics

Hi-Tech Robotics, Gurgaon: Robotics

Maruti Udyog, Noida: Embedded control systems, engineering

L&T, Mumbai

PARI Robotics, Pune: Robotics (industrial automation)

HP Labs: Handwriting recognition, speech recognition, and content management

3rd Indian International Conference on Artificial Intelligence (IICAI-07).

IICAI-07 will be held during December 17-19, 2007 in Pune (pronounced as PooNay), India. IICAI is a series of high-quality technical events in Artificial Intelligence (AI) and is one of the major AI events in the world.

The primary goal of the conference is to promote research and development activities in AI and related fields in India and the rest of the world. Another goal is to promote scientific information exchange between AI researchers, developers, engineers, students, and practitioners working in India and abroad. The conference is held every two years, making it an ideal platform for people to share views and experiences in AI and related areas.



PCAI: Where intelligent technology meets (www.pcai.com)

PC AI Online provides the latest information on intelligent applications and artificial intelligence. Online issues include tutorials, new product announcements, buyer's guides, as well as examples of successful uses of these technologies to solve real world problems. Readers include AI novices, experienced researchers and every level of expertise in between.
Topics include: Knowledge Based Systems, AI Languages, Neural Networks, Machine Learning, Genetic Algorithms, Evolutionary Software, Expert Systems, Fuzzy Logic, Data Mining, Intelligent Agents, Business Rules, Case-Based Reasoning, Common Sense, Data Visualization, Inferencing, Forecasting, Java, Pattern Matching, Speech, Rule-Based Systems, Text Mining, Vision, Robotics and much more.


Data mining

Data mining (DM), also called Knowledge Discovery in Databases (KDD) or Knowledge Discovery and Data Mining, is the process of automatically searching large volumes of data for patterns using tools such as classification, association rule mining, and clustering. Data mining is a complex topic with links to core fields such as computer science, and it builds on computational techniques from statistics, information retrieval, machine learning, and pattern recognition.
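One of the tools named above, association rule mining, can be shown in miniature: count how often items co-occur across transactions and keep the pairs above a support threshold. The market-basket data is invented for illustration.

```python
# Toy frequent-pair mining: the first step of association rule discovery.
from itertools import combinations
from collections import Counter

baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"milk", "butter"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):  # every item pair in the basket
        pair_counts[pair] += 1

min_support = 2  # a pair must appear in at least 2 baskets to be "frequent"
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)  # {('bread', 'milk'): 2, ('butter', 'milk'): 2}
```

Real miners such as Apriori extend this from pairs to larger itemsets and then derive rules like "bread implies milk" with confidence scores, but the co-occurrence counting is the core of the pattern search.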


Fuzzy Logic

Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle uncertainty in data. It was introduced by Dr. Lotfi Zadeh of UC Berkeley in the 1960s as a means to model the uncertainty of natural language. Fuzzy logic is useful in processes like manufacturing because of its ability to handle situations that traditional true/false logic cannot adequately deal with. It lets a process specialist describe, in everyday language, how to control actions or make decisions without having to describe the underlying complex behavior.
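The everyday-language rule "if the temperature is hot, run the fan" can be sketched with a membership function: instead of true/false, a reading belongs to "hot" to a degree between 0 and 1, and the action is scaled by that degree. The breakpoints below are invented.

```python
# Toy fuzzy controller: membership in "hot" is a degree, not a boolean.

def hot_membership(temp_c):
    """0 below 25°C, 1 above 35°C, linear in between."""
    if temp_c <= 25:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 25) / 10.0

def fan_speed(temp_c, max_rpm=2000):
    """Rule 'IF temperature is hot THEN run the fan', scaled by membership."""
    return hot_membership(temp_c) * max_rpm

print(fan_speed(30))  # 1000.0: 30°C is "half hot", so the fan runs at half speed
```

Boolean logic would have to pick a hard threshold and switch the fan abruptly; the fuzzy version degrades smoothly, which is why it suits the control systems mentioned above.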

Genetic algorithm

A genetic algorithm (GA for short) is a search technique used in computing to find exact or approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
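The selection-crossover-mutation loop just described can be sketched on the standard "OneMax" toy problem (maximize the number of 1-bits in a string); the population sizes and rates below are arbitrary choices, not tuned values.

```python
# Minimal genetic algorithm on OneMax: evolve bit strings toward all-ones.
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)                            # number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)           # single-point recombination
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - g if random.random() < rate else g for g in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                   # selection: fittest half survives
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(fitness(max(pop, key=fitness)))  # close to the maximum of 20
```

Nothing in the loop knows what a "good" bit string looks like; inheritance, mutation, and survival of the fittest alone push the population toward the optimum, which is the whole idea of evolutionary computation.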

Intelligent system

The term "intelligent system" is sometimes used for incomplete intelligent systems, for instance an intelligent house or an expert system. Here we talk about complete intelligent systems. Such a system has senses to gather information from its environment. It can act, and it has a memory of the results of its actions. It has an objective, and by inspecting its memory it can learn from experience how to better reach that objective.
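The sense-act-remember-learn cycle just described can be sketched as a tiny agent on a number line trying to reach a goal position; the world and the learning rule are invented to keep the loop visible.

```python
# Toy complete intelligent system: senses distance to its objective, acts,
# remembers how much progress each action made, and exploits that memory.

class Agent:
    def __init__(self, objective=10):
        self.position = 0
        self.objective = objective      # the objective it tries to reach
        self.memory = {}                # action -> remembered progress

    def sense(self):
        return self.objective - self.position   # distance to the goal

    def act(self):
        # Try each action once, then exploit the remembered best one.
        untried = [a for a in (-1, +1) if a not in self.memory]
        action = untried[0] if untried else max(self.memory, key=self.memory.get)
        before = self.sense()
        self.position += action
        self.memory[action] = before - self.sense()  # progress this action made
        return action

agent = Agent()
for _ in range(30):
    agent.act()
    if agent.sense() == 0:
        break
print(agent.position)  # 10: it learned from experience which action helps
```

All four ingredients from the paragraph appear: `sense()` gathers information, `act()` changes the world, `memory` records results, and inspecting that memory steers later actions toward the objective.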

Ontology

In both computer science and information science, an ontology is a data model that represents a domain and is used to reason about the objects in that domain and the relations between them.
Ontologies are used in artificial intelligence, the semantic web, software engineering and information architecture as a form of knowledge representation about the world or some part of it. Ontologies generally describe:

1) Individuals: the basic or "ground level" objects
2) Classes: sets, collections, or types of objects
3) Attributes: properties, features, characteristics, or parameters that objects can have and share
4) Relations: ways that objects can be related to one another
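The four components above can be sketched with plain dictionaries, plus one reasoning step over the class hierarchy. The zoo domain is invented for illustration; real ontologies use dedicated languages such as OWL.

```python
# Toy ontology: the four components as plain data structures.
classes = {"Animal": None, "Bird": "Animal"}        # class -> superclass
individuals = {"tweety": "Bird"}                    # individual -> its class
attributes = {"tweety": {"color": "yellow"}}        # individual -> properties
relations = [("tweety", "lives_in", "aviary")]      # subject, predicate, object

def is_a(individual, cls):
    """Reason over the hierarchy: does the individual belong to cls?"""
    current = individuals[individual]
    while current is not None:
        if current == cls:
            return True
        current = classes[current]                  # walk up to the superclass
    return False

print(is_a("tweety", "Animal"))  # True: Bird is a subclass of Animal
```

The `is_a` walk is the simplest possible example of what the definition calls "reasoning about the objects in the domain": a fact never stated directly (tweety is an Animal) is derived from the model.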

Blackboard Technology

A blackboard architecture has three major components:
* A hierarchically organized global memory or database called a blackboard which saves the solutions generated by the knowledge sources;
* A collection of knowledge sources that generate independent solutions on the blackboard using expert systems, neural networks, and numerical analysis;
* A separate control module or scheduler which reviews the knowledge sources and selects the most appropriate one.
The advantages of a blackboard include separation of knowledge into independent modules, with each module free to use the most appropriate technology to arrive at the best solution efficiently. An additional advantage of the independent modules is the potential for using separate computing units for the independent knowledge sources, thus allowing distributed computing. This approach allows rapid prototyping of complex problems and simplifies long-term system maintenance.
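The three components above can be sketched on a deliberately trivial task (assembling a word from fragments): a shared blackboard, knowledge sources that each contribute only when they can, and a scheduler that reviews them until no one can add anything. The task and source logic are invented.

```python
# Toy blackboard architecture: global memory, knowledge sources, scheduler.
blackboard = {"partial": "", "goal": "cat"}

# Each knowledge source is independent and fires only when it can help.
def ks_add_c(bb):
    if bb["partial"] == "":
        bb["partial"] = "c"

def ks_add_a(bb):
    if bb["partial"] == "c":
        bb["partial"] += "a"

def ks_add_t(bb):
    if bb["partial"] == "ca":
        bb["partial"] += "t"

knowledge_sources = [ks_add_t, ks_add_a, ks_add_c]  # order doesn't matter

# The scheduler reviews the sources and applies them until nothing changes.
while blackboard["partial"] != blackboard["goal"]:
    before = blackboard["partial"]
    for ks in knowledge_sources:
        ks(blackboard)
    if blackboard["partial"] == before:
        break  # no source can contribute; stop rather than loop forever

print(blackboard["partial"])  # "cat", assembled by independent sources
```

Note that the sources never call each other; all coordination happens through the shared blackboard, which is what makes it possible to swap one source's technology (an expert system, a neural network) without touching the rest.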

Case-based reasoning

Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems. An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents, or a judge who creates case law, is using case-based reasoning. So, too, an engineer copying working elements of nature (practicing biomimicry) is treating nature as a database of solutions to problems. Case-based reasoning is a prominent kind of analogy making.
It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving.
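The mechanic example above maps directly onto CBR's retrieve-and-reuse cycle: find the stored case whose symptoms best overlap the new problem and reuse its solution. The repair cases below are invented, and real CBR systems add revision and retention steps this sketch omits.

```python
# Toy case-based reasoner: retrieve the most similar past case by
# symptom-set overlap and reuse its solution.
case_base = [
    {"symptoms": {"no_start", "clicking"}, "solution": "replace battery"},
    {"symptoms": {"no_start", "fuel_smell"}, "solution": "check fuel injector"},
    {"symptoms": {"overheating", "steam"}, "solution": "replace coolant hose"},
]

def retrieve(symptoms):
    """Return the solution of the past case sharing the most symptoms."""
    best = max(case_base, key=lambda case: len(case["symptoms"] & symptoms))
    return best["solution"]

print(retrieve({"no_start", "clicking", "dim_lights"}))  # replace battery
```

The new car's symptoms match no stored case exactly; the system still proposes a fix by analogy to the closest precedent, which is the essence of CBR.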


INTELLIGENT SYSTEMS DIVISION (NASA)

Major research activities
Robust Software Engineering
Increased software quality, reliability, and productivity through research done in the context of NASA applications

Autonomous Systems and Robotics
Development of technologies required for systems that can adapt their behavior to complex, rapidly changing environments

Collaborative and Assistant Systems
Information technologies and collaboration tools that facilitate the specialized work of distributed teams in NASA mission settings

Discovery and Systems Health
Tools and methods for systems health management; large-scale science and aeronautical data analysis and data mining


American Association for Artificial Intelligence

The American Association for Artificial Intelligence or AAAI is a North American organization dedicated to advancing understanding of artificial intelligence (AI). The AAAI seeks to expand both the technical, scientific understanding of AI as well as the public understanding of AI as a science.
The organization, founded in 1979, has in excess of 6,000 members worldwide. In its early history, the organization was presided over by notable figures in computer science such as Allen Newell, Edward Feigenbaum, Marvin Minsky and John McCarthy. The current president is Alan Mackworth.

The AAAI sponsors many conferences and symposia each year as well as providing support to 14 journals in the field of artificial intelligence. The AAAI also established the AAAI Press in association with the MIT Press in 1989 to produce books of relevance to artificial intelligence research. Additionally, the AAAI produces a quarterly publication, AI Magazine, which is written in such a way that it allows researchers to broaden the scope of their knowledge beyond their sub-fields.

Every other year, AAAI works with other AI organizations worldwide to put together the International Joint Conference on Artificial Intelligence (IJCAI).

Artificial Intelligence Research Group at Harvard

The Artificial Intelligence Research Group at Harvard serves as the centre of all AI-related research occurring at Harvard through a series of colloquia, forums, and presentations. Leading members of Harvard's faculty, in conjunction with researchers, graduate students, and undergraduates, meet weekly and, through various activities, seek to stimulate and promote AI research at Harvard. A brief summary of the type and areas of research focussed upon by members of AIRG is provided below.

Areas of Research:
Natural Language Processing: Computational Linguistics, Statistical Language Processing, Discourse
Human-Computer Interface: Automated Graphic Design, Collaborative Interfaces


Reasoning under Uncertainty
Probabilistic Reasoning for Complex Systems
Learning Rich Probabilistic Models
Effective Algorithms for Game-Theoretic Problems
Multiagent Systems
Bio-inspired multiagent systems and models in biology
Collective Robotics
Computational mechanism design
Automated negotiation
Electronic auctions
Collaborative problem solving
