AI: Paradigm Shifts - PowerPoint PPT Presentation

About This Presentation

Title: AI: Paradigm Shifts

Description: AI research trends continue to shift; moving AI from a stand-alone component to a component within other software systems; consider the original goal was to build ...

Number of Views:40
Avg rating:3.0/5.0
Slides: 41
Provided by: nkuEdufo6
Learn more at: http://www.nku.edu
Category:

less

Write a Comment
User Comments (0)
Transcript and Presenter's Notes


1
AI: Paradigm Shifts
  • AI research trends continue to shift
  • AI is moving from a stand-alone component to a
    component within other software systems
  • the original goal was to build intelligence
  • later, the goal became problem-solving systems
  • now the goal is autonomous (agents) or
    semi-autonomous (robots) software systems, or
    systems that work with humans (data mining,
    decision support tools)
  • Machine learning has similarly shifted
  • originally the concept was vague, with no idea
    of how to approach it
  • early symbolic approaches dealt with acquiring
    knowledge or building on top of already present
    knowledge
  • neural networks focused on training to solve a
    given task
  • today, we often view learning as improving
    performance through training
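
The last bullet, learning as improving performance through training, can be made concrete with a minimal sketch (a perceptron trained on an invented task; the data and function names are illustrative, not from the slides):

```python
# Minimal sketch of "learning as improving performance through training":
# a perceptron whose accuracy on a task rises as it sees examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a binary threshold unit from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    hits = sum(1 for (x1, x2), label in samples
               if (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label)
    return hits / len(samples)

# Logical OR as the (learnable) training task
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print(accuracy(w, b, data))  # 1.0 after training
```

Performance here is just accuracy on the task, which is exactly the modern framing: the learner is judged by how much training improves a measurable score.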

2
Other Paradigm Shifts
  • Early AI was almost solely symbolic based
  • Early neural network research of the 1960s made
    little lasting impact
  • In the 1980s, there was a shift away from
    symbolic approaches toward connectionism
  • But that shift was somewhat short-lived, as the
    limitations of neural networks were demonstrated
  • Today, we see all kinds of approaches
  • symbolic knowledge-based systems, possibly using
    fuzzy logic
  • symbolic ontologies
  • symbolic probabilistic reasoning through networks
    (Bayesian networks, HMMs)
  • neural networks
  • genetic algorithms

3
AI Approaches in the Future
  • Obviously, we can't predict now what approaches
    will be invented in the next 5-10 years or how
    these new approaches will impact or replace
    current approaches
  • However, the following approaches are finding
    uses today and so should continue to be used
  • data mining on structured data
  • machine learning approaches: Bayesian networks,
    neural networks, support vector machines
  • case-based reasoning for planning, model-based
    reasoning systems
  • rules will continue to lie at the heart of most
    approaches (except neural networks)
  • mathematical modeling of various types will
    continue, particularly in vision and other
    perceptual areas
  • interconnected networks of software agents
  • AI as part of productivity software/tools

4
AI Research For the Short Term
  • Reinforcement learning
  • Applied to robotics
  • Semantic web
  • Semi-annotated web pages
  • Development of more intelligent agents
  • Speech recognition
  • Improvement in areas of accuracy, larger
    vocabulary, and speed (reducing amount of search)
  • Natural language understanding
  • Tying symbolic rule-based approaches to
    probabilistic approaches, especially for semantic
    understanding, discourse, and pragmatic analysis
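
The reinforcement learning bullet above can be illustrated with a toy example: tabular Q-learning on a one-dimensional corridor in which a "robot" learns to reach the goal cell. The corridor, rewards, and parameters are all invented for illustration.

```python
import random

N_STATES = 5          # corridor cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)    # step left / step right

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on the toy corridor; returns the Q-table."""
    random.seed(0)    # deterministic for the sketch
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            if random.random() < eps:       # explore
                a = random.choice(ACTIONS)
            else:                           # exploit, ties broken at random
                a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learn()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # learned greedy policy: move right from every non-goal cell
```

The same learn-from-reward loop, with a far larger state space and function approximation instead of a table, is what makes the approach attractive for robotics.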

5
Continued
  • Social networks
  • Modeling and reasoning about the dynamics of
    social networks and communities including email
    analysis and web site analysis
  • Multi-agent coordination
  • How will multiple agents communicate, plan and
    reason together to solve problems such as
    disaster recovery and system monitoring (e.g.,
    life support on a space station, power plant
    operations)
  • Bioinformatics algorithms
  • While many bioinformatics algorithms do not use
    AI, there is always room for more robust search
    algorithms

6
Some Predictions?
  • Next 5-10 years
  • Work continues on semantic web,
    robotics/autonomous vehicles, NLP, SR, Vision
  • Within 10 years
  • part of the web is annotated for intelligent
    agent usage
  • modest intelligent agents are added to a lot of
    applications software
  • robotic caretakers reach fruition (but are too
    expensive for most)
  • SR reaches a sufficient level so that continuous
    speech in specific domains is solved
  • NLP in specific domains is solved
  • Reliable autonomous vehicles used in specialized
    cases (e.g., military)

7
And Beyond
  • Within 20 years
  • robotic healthcare made regularly available
  • vision problem largely solved
  • autonomous vehicles available
  • intelligent agents part of most software
  • cognitive prosthetics
  • semantic web makes up a majority of web pages
  • computers regularly pass the Turing Test
  • Within 50 years
  • nano-technology combines with agent technology,
    people have intelligent machines running through
    their bodies!
  • humans are augmented with computer memory and
    processors
  • computers are inventing/creating useful artifacts
    and making decisions
  • Within 100 years (?)
  • true (strong) AI

8
Other Predictions
  • Want to place a bet? These bets are available at
    www.longbets.org/bets
  • By 2020, wearable devices will be available that
    will use speech recognition to monitor and index
    conversations and can be used as supplemental
    memories (Greg Webster)
  • By 2025, at least half of US citizens will have
    some form of technology embedded in their bodies
    for ID/tracking (Douglas Hewes, CEO of Business
    Technologies)
  • By 2029, no computer will have passed the Turing
    test (Mitchell Kapor, with the well-known
    entrepreneur and technologist Ray Kurzweil
    betting against him)
  • By 2030, commercial passenger planes will fly
    pilotless (Eric Schmidt, CEO of Google)
  • By 2050, no machine intelligence will be
    self-aware (Nova Spivack, CEO of Lucid Ventures)
  • By 2108, a sentient AI will exist as a
    corporation providing services as well as making
    its own financial and strategic decisions (Jane
    Walter)

9
Wearable AI
  • Wearable computer hardware is becoming more
    prevalent in society
  • we want to enhance the hardware with software
    that can supply a variety of AI-like services
  • The approach is called humanistic intelligence
    (HI)
  • HI includes the human in the processing such that
    the human is not the instigator of the process
    but the beneficiary of the results of the process

10
HI Embodies 3 Operational Modes
  • Constancy: the HI device is always operational
    (no sleep mode), with information always being
    projected (unlike, say, a wristwatch, which you
    have to look at)
  • Augmentation: the HI augments the human's
    performance by doing tasks by itself and
    presenting the results to the human
  • Mediation: the HI encapsulates the human; that
    is, the human becomes part of the apparatus, for
    instance by wearing special-purpose glasses or
    headphones (but the HI does not enclose the
    human)
  • These systems should be unmonopolizing,
    unrestrictive, observable, controllable,
    attentive, communicative

11
HI Applications
  • Filtering out unwanted information and alerting
  • specialized glasses that hide advertisements or
    replace the content with meaningful information
    (e.g., billboards replaced with news)
  • blocking unwanted sounds such as loud noises with
    white noise
  • alerting a driver of an approaching siren
  • providing GPS directions on your glasses
  • Recording perceptions
  • if we can record a person's perceptions, we
    might be able to play them back for other
    people, e.g., record a live performance
  • other examples include recording people
    performing an activity so that it can be
    repeated (by others)
  • record hand motions while the user plays piano
  • record foot motions while the user dances, to
    capture choreography

12
Continued
  • Military applications
  • aiming missiles or making menu selections in an
    airplane so that the pilot doesn't have to move
    his hands from the controls (some of this
    technology already exists)
  • reconnaissance by tracking soldiers in the field,
    seeing what they are seeing
  • Minimizing distractions
  • using on-board computing to determine what might
    distract you and to prevent it from arising or
    block it out
  • Helping the disabled
  • HI hearing aids, HI glasses for filtering,
    internal HI for medication delivery, reminding
    and monitoring systems for the elderly

13
Beyond AI Wearables
  • As the figure below shows, these devices may
    become more intimately intertwined with the
    human body
  • We are currently attaching ID/GPS mechanisms to
    children and animals
  • Machine-based tattoos are currently being
    researched
  • What about underneath the skin?
  • Nano-technology
  • Hardware inside the human body (artificial
    hearts, prosthetic device interfaces, etc)

14
AI in Space/NASA
  • Planning/scheduling
  • Manned mission planning, conflict resolution for
    multiple missions
  • Multi-agent planning, distributed/shared
    scheduling, adaptive planning
  • Rover path planning
  • Telescope scheduling for observations
  • Deliberation vs. reactive control/planning
  • Plan recovery (failure handling)
  • Life support monitoring and control for safety
  • Simulation of life support systems
  • On-board diagnosis and repair
  • Science
  • Weather forecasting and warning, disaster
    assessment
  • Feature detection from autonomous probes
  • Other forms of visual recognition and discovery
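
The rover path planning bullet above can be illustrated with a tiny grid planner: breadth-first search from start to goal around obstacles. The terrain grid and coordinates are invented for illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free path as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reconstruct the route back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                        # no route exists

terrain = [[0, 0, 0],
           [1, 1, 0],   # 1 = impassable rock
           [0, 0, 0]]
print(plan_path(terrain, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Real rover planners add terrain costs and replanning as sensors update the map, but the search skeleton is the same.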

15
Smart Environments
  • Sometimes referred to as smart rooms
  • Components: a collection of computer(s),
    sensors, networks, AI and other software, and
    actuators (or robots) to control devices
  • Goal: the environment can modify itself based on
    user preferences or goals and safety concerns
  • a smart building might monitor for break-ins,
    fire, or flood, alert people to problems, and
    control traffic (e.g., elevators)
  • smart house might alter A/C, adjust lighting,
    volume, perform household chores
    (starting/stopping the oven, turn on the
    dishwasher), determine when (or if) to run the
    sprinkler system for the lawn
  • smart restaurant might seat people automatically,
    have robot waiters, automatically order food
    stock as items are getting low (but not actually
    cook anything!)
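
The smart-house behaviors above boil down to a sense-decide-actuate loop. A minimal sketch of one decision step (all sensor names, device names, and thresholds are invented):

```python
# Map current sensor readings plus user preferences to actuator commands.

def smart_house_step(sensors, prefs):
    """One pass of the environment's rule set; returns device commands."""
    commands = {}
    if sensors["temp_f"] > prefs["max_temp_f"]:
        commands["ac"] = "on"
    elif sensors["temp_f"] < prefs["min_temp_f"]:
        commands["heat"] = "on"
    if sensors["light_lux"] < prefs["min_lux"] and sensors["occupied"]:
        commands["lights"] = "on"
    if sensors["smoke"]:
        commands["alarm"] = "on"       # safety rules override comfort
    return commands

prefs = {"max_temp_f": 75, "min_temp_f": 65, "min_lux": 120}
readings = {"temp_f": 80, "light_lux": 40, "occupied": True, "smoke": False}
print(smart_house_step(readings, prefs))
# {'ac': 'on', 'lights': 'on'}
```

A deployed system would run this loop continuously and might learn the preference thresholds from the occupants' behavior instead of hard-coding them.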

16
Smart Windows
  • One of the more imminent forms of smart
    environments is the smart window
  • To help control indoor environments as an
    energy-saving device
  • The window contains several sheets of optical
    film, each of which is controlled by a roller
    that can roll the film up or down
  • there are six microcontrollers
  • presumably the system uses fuzzy logic, although
    I could not find details on how the controllers
    make decisions
  • An optical sensor allows the window to identify
    the current situation (too much light, too much
    heat, not enough heat) and respond by creating
    the needed level of translucency by sliding films
    up or down
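
Since the slide can only presume the controllers use fuzzy logic, here is a hedged, illustrative sketch of what such a rule base might look like: fuzzy memberships over the light reading, defuzzified into a film position (0.0 = fully rolled up, 1.0 = fully covering the pane). All thresholds are invented.

```python
def dim_mf(lux):      # membership in "too dim"
    return max(0.0, min(1.0, (300 - lux) / 300))

def bright_mf(lux):   # membership in "too bright"
    return max(0.0, min(1.0, (lux - 500) / 500))

def film_position(lux):
    """Weighted-average defuzzification of two rules:
       IF too bright THEN cover (1.0); IF too dim THEN uncover (0.0)."""
    w_bright, w_dim = bright_mf(lux), dim_mf(lux)
    if w_bright + w_dim == 0:
        return 0.5                      # comfortable: leave film halfway
    return (w_bright * 1.0 + w_dim * 0.0) / (w_bright + w_dim)

print(film_position(100))   # very dim -> 0.0 (roll the film up)
print(film_position(400))   # neither  -> 0.5
print(film_position(1000))  # glaring  -> 1.0 (roll the film down)
```

The attraction of fuzzy control here is the smooth interpolation between "too dim" and "too bright" rather than a hard on/off threshold.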

17
Smart Room
18
Automated Highways
  • Features
  • Provide guidance information for cooperative
    (autonomous) vehicles
  • Monitor and detect non-cooperative vehicles and
    obstacles
  • Plan optimum traffic flow
  • Architecture
  • Network of short-range, high-resolution radar
    sensors on elevated poles
  • Additional equipment in vehicles (transponders
    for instance for location and identification)
  • Sensors on the road for road conditions and on
    the vehicles for traction information
  • Sensors for other obstacles (e.g., animals)
  • Computer network
  • Roadway blocked off from sidewalk and pedestrian
    traffic

19
Evolution of AVs/Highways
20
Smart Highway
21
Smart City Block
22
Creating Human-level Intelligence
  • This was our original goal
  • Is it still the goal of AI?
  • Should this be the primary goal of AI?
  • What approaches are taking us in that direction?
  • Cyc?
  • Cog and other efforts from Brooks?
  • Semantic web and intelligent agents?
  • What do we need to improve this pursuit?
  • Study the brain? Study the mind?
  • Study symbolic approaches? Subsymbolic
    approaches?
  • Machine learning?
  • In spite of such pursuits, most AI is looking at
    smaller scale problems and solutions
  • And in many cases, we now are willing to embrace
    helper programs that work with human users

23
Social Concerns: Unemployment
  • According to economics experts, computer
    automation has created as many jobs as it has
    replaced
  • Automation has shifted the job skills from blue
    collar to white collar, thus many blue collar
    jobs have been eliminated (assembly line
    personnel, letter sorters, etc)
  • What about AI?
  • Does AI create as many jobs as it makes obsolete?
  • probably not; AI certainly requires programmers,
    knowledge engineers, etc., but once the system
    is created, there is no new job creation
  • Just what types of jobs might become obsolete
    because of AI?
  • secretarial positions because of intelligent
    agents?
  • experts (e.g., doctors, lawyers) because of
    expert systems?
  • teachers because of tutorial systems?
  • management because of decision support systems?
  • security (including police), armed forces,
    intelligence community?

24
Social Concerns: Liability
  • Who is to blame when an AI system goes wrong?
  • Imagine these scenarios
  • autonomous vehicle causes multi-car pile-up on
    the highway
  • Japanese subway car does not stop correctly
    causing injuries
  • expert medical system offers wrong diagnosis
  • machine translation program incorrectly
    translating statements between diplomats leading
    to conflict or sanctions
  • We cannot place blame on the AI system itself
  • According to law, liability in the case of an AI
    system can be placed on all involved
  • the user(s) for not using it correctly
  • the programmers/knowledge engineers
  • the people who supplied the knowledge (experts,
    data analysts, etc)
  • management and researchers involved
  • AI systems will probably require more thorough
    testing than normal software systems
  • at what point in the software process should we
    begin to trust the AI system?

25
Case Study: Therac-25
  • Medical accelerator system to create high energy
    electron beams
  • used to destroy tumors, can convert the beam to
    x-ray photons for radiation treatments
  • Therac-25 is both hardware and software
  • earlier versions, Therac-6 and Therac-20, were
    primarily the hardware, with minimal software
    support
  • Therac-6 and -20 were produced by two companies,
    but Therac-25 was produced by only one of the
    two (AECL), borrowing software routines from
    Therac-6 (and, unknown to the quality assurance
    manager, from Therac-20)
  • 11 units were sold (5 in the US, 6 in Canada) in
    the early-to-mid 1980s; during this time, 6
    people were injured (several fatally) by
    radiation overdoses

26
The 6 Reported Accidents
  • 1985: a woman undergoing a lumpectomy receives
    15,000-20,000 rads; she eventually loses her
    breast due to overexposure to radiation, and
    also loses the use of her arm and shoulder
  • the treatment printout facility of the Therac-25
    was not operating during this session, so AECL
    could not recreate the accident
  • 1985: a patient treated for carcinoma of the
    cervix; a user interface error causes an
    overexposure of 13,000-17,000 rads; the patient
    dies in 4 months of extremely virulent cancer;
    had she survived, total hip replacement surgery
    would have been needed
  • 1985: treatment for erythema on the right hip
    results in burning of the hip; the patient is
    still alive with minor disability and scarring

27
Continued
  • 1986: a patient receives an overdose caused by a
    software error, 16,500-25,000 rads, and dies
    within 5 months
  • 1986: at the same facility, the same error; the
    patient receives 25,000 rads and dies within 3
    weeks
  • 1987: AECL had fixed all of the previously
    reported problems; a new error of hardware,
    coupled with user interface and operator error,
    results in a patient who was supposed to get 86
    rads being given 8,000-10,000 rads; the patient
    dies 3 months later
  • note: Therac-20 had hardware problems which
    would have produced the same errors as in
    accidents 4 and 5 above, but because the safety
    interlocks were in hardware, the error never
    arose during treatment to harm a patient

28
Causes of Therac-25 Accidents
  • Therac-20 used hardware interlocks for
    controlling hardware settings and ensuring safe
    settings before beam was emitted
  • User interface was buggy
  • Instruction manual omitted malfunction code
    descriptions so that users would not know why a
    particular shut down had occurred
  • Hardware/software mismatch led to errors with
    turntable alignment
  • Software testing produced a software fault tree
    that appeared to use made-up likelihoods for
    given errors (there was no justification for the
    values given)

29
Continued
  • In addition, the company was slow to respond to
    injuries and often reported "we cannot recreate
    the error"; they also failed to report injuries
    to other users until forced to by the FDA
  • Investigators found that the company had less
    than acceptable software engineering practices
  • There was a lack of useful user feedback from
    the Therac-25 system when it shut down, and the
    failure-reporting mechanism was off-line during
    one of the accidents

30
Safety Needs in Critical Systems
  • It is becoming more and more important to apply
    proper software engineering methodologies to AI
    to ensure correctness
  • Especially true in critical systems (Therac-25,
    International Space Station), real-time systems
    (autonomous vehicles, subway system)
  • Some suggestions
  • Increase the usage of formal specification
    languages (e.g., Z, VDM, Larch)
  • Add hazard analysis to requirements analysis
  • Formal verification should be coupled with formal
    specification
  • statistical testing, code/document inspection,
    automated theorem provers
  • Develop techniques for software development that
    encapsulate safety
  • formal specifications for component retrieval
    when using previously written classes to limit
    the search for useful/usable components
  • reasoning on externally visible system behavior,
    reasoning about system failures (this is
    currently being researched to be applied to life
    support systems on the International Space
    Station)
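
As an illustration of encapsulating safety in software, here is a toy interlock check in the spirit of the Therac discussion: the beam may fire only after every safety predicate is re-checked against the actual hardware state, and any failure vetoes the beam. All mode names, dose limits, and messages are hypothetical.

```python
SAFE_MODES = {"electron", "xray"}
MAX_DOSE = {"electron": 200, "xray": 100}   # illustrative units

def interlock_ok(mode, dose, turntable_pos):
    """Independent safety checks; any single failure vetoes the beam."""
    checks = [
        mode in SAFE_MODES,
        0 < dose <= MAX_DOSE.get(mode, 0),
        turntable_pos == mode,   # turntable must match the selected mode
    ]
    return all(checks)

def fire_beam(mode, dose, turntable_pos):
    if not interlock_ok(mode, dose, turntable_pos):
        return "SHUTDOWN: interlock veto"
    return f"firing {mode} beam at {dose}"

print(fire_beam("xray", 80, "xray"))      # firing xray beam at 80
print(fire_beam("xray", 80, "electron"))  # SHUTDOWN: interlock veto
```

The Therac-25 removed exactly this kind of independent check by trusting operator input over verified hardware state; formal specification and hazard analysis are aimed at proving such predicates are present and correct.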

31
Social Concerns: Explanation
  • An early complaint of AI systems was their
    inability to explain their conclusions
  • Symbolic approaches (including fuzzy logic, rule
    based systems, case based reasoning, and others)
    permit the generation of explanations
  • depending on the approach, the explanation might
    be easy or difficult to generate
  • chains of logic are easy to capture and display
  • Neural network approaches have no capacity to
    explain
  • in fact, we have no idea what internal nodes
    represent
  • Bayesian/HMM approaches are limited to
  • probabilistic results (showing probabilities to
    justify an answer)
  • paths through an HMM
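
The claim that chains of logic are easy to capture and display can be shown directly: a forward-chaining rule engine that keeps the sequence of fired rules as its explanation. The rules and facts below are invented for illustration.

```python
# Rules: (name, set of premises, conclusion)
RULES = [
    ("r1", {"fever", "cough"}, "flu_suspected"),
    ("r2", {"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward-chain to a fixed point; return (facts, fired-rule trace)."""
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(premises)} => {conclusion}")
                changed = True
    return facts, trace

facts, explanation = infer({"fever", "cough", "short_of_breath"})
for step in explanation:
    print(step)
# r1: ['cough', 'fever'] => flu_suspected
# r2: ['flu_suspected', 'short_of_breath'] => see_doctor
```

The trace doubles as the answer to "why did you conclude that?", which is precisely the capability a trained neural network lacks.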

32
Continued
  • As AI researchers have moved on to more
    mathematical approaches, they have lost the
    ability (or given up on the ability) to have the
    AI system explain itself
  • How important will it be for our AI system to
    explain itself?
  • is it important for speech recognition?
  • is it important for an intelligent agent?
  • here, the answer is probably yes; if the agent
    is performing a task for a person, the person
    may want to ask "why did you choose that?"
  • is it important for a diagnostic system?
  • extremely important
  • is it important for an autonomous vehicle?
  • possibly only for debugging purposes

33
Social Concerns: AI and Warfare
  • What are the ethics of fighting a war without
    risking our lives?
  • Consider that we can bomb from a distance
    without risk to troops; since this lessens our
    risk, does it make the decision to go to war
    easier?
  • How would AI impact warfare?
  • mobile robots instead of troops on the
    battlefield
  • predator drone aircraft for surveillance and
    bombing
  • smart weapons
  • better intelligence gathering
  • While these applications of AI give us an
    advantage, might they also make the decision to
    go to war easier?
  • On the other hand, can we trust our fighting to
    AI systems?
  • Could they kill innocent bystanders?
  • Should we trust an AI system's intelligence
    report?

34
Social Concerns: Security
  • In a similar vein, we are attempting to use AI
    more and more in the intelligence community
  • Assist with surveillance
  • Assist with data interpretation
  • Assist with planning
  • Will the public back AI-enhanced security
    approaches?
  • What happens if we come to rely on such
    approaches?
  • Are they robust enough?
  • Given the sheer amount of data that we must
    process for intelligence, AI approaches make
    fiscal sense
  • How do we ensure that we do not have gaps in
    what such systems analyze?
  • How do we ensure accuracy of AI-based
    conclusions?
  • In some ways, we might think of AI in security as
    a critical system, and AI in disaster planning as
    a real time system

35
Social Concerns: Privacy
  • This is primarily a result of data mining
  • We know there is a lot of data out there about us
    as individuals
  • what is the threat of data mining to our privacy?
  • will companies misuse the personal information
    that they might acquire?
  • We might extend our concern to include
    surveillance: why should AI surveillance be
    limited to (hypothetical) enemies?
  • Speech recognition might be used to transcribe
    all telephone conversations
  • NLU might be used to intercept all emails and
    determine whether the content of a message is
    worth investigating
  • We are also seeing greater security mechanisms
    implemented at areas of national interest
    (airports, train stations, malls, sports arenas,
    monuments, etc.), cameras for instance
  • previously it was thought that not enough people
    could be hired to watch everyone, but computers
    could

36
What If Strong AI Becomes a Reality?
  • Machines doing our work for us would leave us
    with
  • more leisure time
  • the ability to focus on educational pursuits,
    research, art
  • computers could teach our young (is this good or
    bad?)
  • computers could be in charge of transportation
    thus reducing accidents, and possibly even saving
    us on fuel
  • computers may even be able to discover and create
    for us
  • cures to diseases, development of new power
    sources, better computers
  • On the negative side, this could also lead us
    toward
  • debauchery (with leisure time, we might descend
    into decadence)
  • consider that ancient Romans had plenty of free
    time because of slavery
  • unemployment, which itself could lead to
    economic disaster
  • if computers can manufacture for us anything we
    want, this can also lead to economic problems
  • We might become complacent and lazy and therefore
    not continue to do research or development

37
AI: The Moral Dilemma
  • Researchers (scientists) have often faced the
    ethical dilemmas inherent with the product of
    their work
  • Assembly line
  • Positive outcomes: increased production and
    brought economic benefits
  • Negative outcomes: increased unemployment,
    dehumanized many processes, and increased
    pollution
  • Atomic research
  • Positive outcomes: ended World War II and
    provided nuclear power
  • Negative outcomes: led to the Cold War and the
    constant threat of nuclear war, created nuclear
    waste, and now we worry about WMDs
  • Many researchers refused to go along with the US
    government's quest to research atomic power once
    they realized that the government wanted it for
    atomic bombs
  • They feared what might come of using the bombs
  • But did they have the foresight to see what
    other problems would arise (e.g., nuclear waste)
    or the side-effect benefits (eventually, the
    arms race caused the collapse of the Soviet
    Union because of its expense)?
  • What side effects might AI surprise us with?

38
Long-term Technological Advances
  • If we extrapolate prior growth of technology, we
    might anticipate
  • enormous bandwidth (terabit per second),
    secondary storage (petabyte) and memory
    capacities (terabyte) by 2030
  • in essence, we could record all of our
    experiences electronically for our entire
    lifetime, store them on computer, and download
    any experience across a network quickly
  • Where might this lead us?
  • Teleportation: combining network capabilities,
    virtual reality, and AI
  • Time travel: being able to record our
    experiences, thoughts, and personalities in a
    form of agent representative, so that future
    generations can communicate with us (combining
    machine learning, agents, and NLU)
  • Immortality: the next step is to upload these
    representatives into robotic bodies; while these
    will not be us, our personalities can live on,
    virtually forever

39
Ethical Stance of Creating True AI
  • Today we use computers as tools
  • software is just part of the tool
  • AI is software
  • will we use it as a tool?
  • Does this make us slave masters?
  • ethically, should we create slaves?
  • if, at some point, we create strong AI, do we set
    it free?
  • what rights might an AI have?
  • would you permit your computer to go on strike?
  • would you care if your computer collects data on
    you and trades it for software or data from
    another computer?
  • can we ask our AI programs to create better AI
    programs and thus replace themselves with better
    versions?
  • What are the ethics of copying AI?
  • we will presumably be able to mass produce the AI
    software and distribute it, which amounts
    essentially to cloning
  • humans are mostly against human cloning; what
    about machine cloning?

40
End of the World Scenario?
  • When most people think of AI, they think of
  • AI run amok
  • Terminator, The Matrix, etc. (anyone remember
    Colossus: The Forbin Project?)
  • Would an AI system with a will of its own
    (whether this is self-awareness or just
    goal-oriented) want to take over mankind or kill
    us all?
  • how plausible are these scenarios?
  • It might be equally likely that an AI that has a
    will of its own would just refuse to work for us
  • might AI decide that our problems/questions are
    not worthy of its time?
  • might AI decide to work on its own problems?
  • Can we control AI to avoid these problems?
  • Asimov's three laws of robotics are fiction; can
    we make them reality?
  • How do we motivate an AI? How do we reward it?