Exhibitor News

Gartner: The 2018 AI technology maturity curve (Hype Cycle)

I. Technologies that dropped off the curve
In 2018, the following technologies dropped off the curve:
1. Virtual customer assistants
2. Cognitive expert advisers
3. Level 3 and Level 4 autonomous driving
4. Deep reinforcement learning
5. Intelligent applications
6. AI-enabled IT operations (AIOps) platforms

II. Key technologies across the five stages of the 2018 curve
(I) The rising stage
1. Artificial intelligence governance
AI governance uses predictive models and algorithms to guide how AI is applied, to optimize the allocation of decision rights, and to hold the organization accountable for risk and for control of the investment decision-making process. Whatever the AI, its data sources must be trustworthy. To avoid one-sided information, combine new, different, or even contradictory data with the data already in use, minimizing the risk of bias introduced by AI.
2. General artificial intelligence
General-purpose AI is also known as strong AI. Current AI systems may appear to learn, reason, and adapt like humans, but they lack common sense, general intelligence, and broad means of self-maintenance and replication. Real progress in AI has been limited to narrow (weak) AI. Today's AI technologies cannot be shown to match human intelligence (the lack of consensus on how to test such intelligence is itself a problem). At some point it may be possible to build a machine that approaches human cognition, but the necessary research and engineering will probably take decades.
Cutting-edge AI technologies, including deep learning tools and related natural language processing capabilities, are driving what are now considered "amazing innovations," doing things we previously thought technology could not. However, they are usually research tools just emerging from the laboratory; over time we will not gain a complete grasp of their engineering principles but will instead come to understand their limitations and tailor feasible development strategies around them. As the novelty wears off, people begin to lose interest.
Special-purpose AI will have a hugely disruptive impact on business and personal life. But until a major technological breakthrough occurs, any vendor's claim that its product has general-purpose artificial intelligence should be treated with skepticism; such claims are often an illusion created by clever programming. General-purpose AI is unlikely to emerge in the next decade. When it eventually does, it will likely be the result of combining many special-purpose AI technologies.
3. Artificial intelligence development kits
AI development kits are applications and software development kits (SDKs) that abstract data platforms, frameworks, and analytics libraries, enabling software engineers to deliver AI-enabled applications. They cover: cloud-based AI services; toolkits for virtual assistants (such as Apple Siri, Amazon Alexa and Google Assistant); device development kits; and AI service SDKs. Software engineers use them to integrate AI into new or existing applications.
Over the past 18 to 24 months, vendors have been actively providing developer-oriented AI toolkits and SDKs. Representative products include: cloud-based AI service platforms (e.g., Google AutoML, AWS SageMaker, and Azure ML Studio); toolkits for virtual assistants (e.g., the Amazon Alexa Skills Kit, Apple SiriKit, Baidu DuerOS open platform, Google Dialogflow, and the Cortana Devices SDK); device development kits (e.g., Microsoft Vision AI); and AI service SDKs (e.g., Apple's Core ML and Google's ML Kit).
In every category, vendor offerings involve different deployment considerations and differ in feature coverage, but we expect cloud-based AI-as-a-service platforms to reduce the complexity of data science and therefore win developer adoption more readily than native PaaS platforms. However, vendors differ significantly in data preparation, feature selection, model selection and training, hyperparameter optimization, and model deployment.
AI developer kits support capabilities such as image recognition (including faces and landmarks), text analysis, and image tagging. Developers can also deploy custom models at runtime and optionally update models in cloud services. Device development kits bundle custom hardware with APIs and SDKs to encourage developer adoption of the platform. As platform support is absorbed into broader market products, standalone platform-vendor kits will decline.
Global demand for AI is huge and growing far faster than experienced data scientists alone can meet. As these products mature, support for edge- or device-centric AI models can be delivered through lightweight runtime frameworks. The combined efforts of software engineers and data scientists increase the attractiveness and stickiness of broader vendor cloud platform products, including platform as a service (PaaS).
4. Knowledge graphs
A knowledge graph encodes information as data in a network of nodes and linking edges, rather than as tables of rows and columns. Specialist vendors are bringing graph-based products to new markets, and established vendors are offering the technology in their platforms and products.
Using natural language processing (NLP) and related text-analytics techniques, knowledge graphs are well suited to storing data extracted from unstructured sources. They can also store structured data, including metadata that implicitly provides structure and meaning, and encode information that supports a variety of use cases.
Application leaders should use knowledge graphs to connect different concepts and enrich their data with missing information. The dynamic relationships produced by digital assets, data sources, and process interactions can be automatically discovered and exploited through graph analytics.
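As a minimal illustration of the node-and-edge encoding described above, the sketch below stores facts as (subject, predicate, object) triples in plain Python dictionaries. All names and facts are hypothetical and do not reflect any vendor's API.

```python
# Minimal knowledge-graph sketch: nodes plus labeled edges,
# using only plain Python dictionaries.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # edges[subject] -> list of (predicate, object) pairs
        self.edges = defaultdict(list)

    def add_fact(self, subject, predicate, obj):
        self.edges[subject].append((predicate, obj))

    def query(self, subject, predicate):
        """Return all objects linked to subject by predicate."""
        return [o for p, o in self.edges[subject] if p == predicate]

kg = KnowledgeGraph()
kg.add_fact("Gartner", "publishes", "Hype Cycle")
kg.add_fact("Hype Cycle", "covers", "knowledge graphs")
kg.add_fact("Hype Cycle", "covers", "neuromorphic hardware")

print(kg.query("Hype Cycle", "covers"))
# -> ['knowledge graphs', 'neuromorphic hardware']
```

Unlike a fixed table schema, new relationship types can be added at any time simply by using a new predicate string.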
5. Neuromorphic hardware
Neuromorphic hardware comprises semiconductor devices conceptually inspired by neurobiological architectures. Neuromorphic processors adopt a non-von Neumann architecture and implement execution models distinct from those of traditional processors. They are characterized by simple processing elements but very high interconnectivity.
Neuromorphic systems are at a very early prototype stage. IBM has delivered a TrueNorth-based system to Lawrence Livermore National Laboratory. BrainChip's spiking-neuron adaptive processor technology and Hewlett Packard Enterprise's lab prototypes are among the early products, and Intel's "Loihi" chip addresses broader AI workloads with a higher level of connectivity. Qualcomm was an early representative of neuromorphic processors.
The main obstacles to deploying neuromorphic hardware are: GPUs are easier to access and program than neuromorphic chips; programming neuromorphic hardware requires new tools and training methods; and interconnect complexity challenges semiconductor manufacturers to create viable neuromorphic devices. Currently, neuromorphic hardware is not on the mainstream path for deep neural networks (DNNs), but that could change as programming techniques mature.
Neuromorphic computing architectures can deliver extreme performance for deep neural networks, because they run at very low power and can be trained faster than the GPU-based DNN systems deployed today. In addition, neuromorphic architectures can support graph analytics. Most of today's neuromorphic architectures have not been adopted by the mainstream. However, these architectures will mature over the next five years and will open up new opportunities.
Such devices may also run small DNNs at the edge, reducing bandwidth and central-processing constraints. We are in an extremely rapid evolutionary cycle, supported by new hardware designs, practical DNN algorithms, and the big data used to train these systems. Neuromorphic devices have the potential to push DNNs further toward the edge of the network.

6. AI-related consulting and system integration services
This is a notable direction in AI services: helping customers identify use cases; design business or IT processes; select and optimize technologies; manage data; build, deploy, and train model solutions; assess and reduce risk; and adjust the talent mix so as to form a successful intelligent solution involving one or more advanced technologies, such as machine learning, natural language processing, and deep learning. Such services can be applied to prediction, using learning systems for data mining and pattern recognition to provide insight, detect anomalies, personalize experiences, and anticipate likely events. They can also be applied to intelligent search across structured and unstructured data, extracting the key terms that deserve attention in texts such as contracts, reducing the volume of text to be read, and letting employees focus their time on the relevant clauses.
7. Human-in-the-loop crowdsourcing
Human-in-the-loop crowdsourcing pairs human judgment with algorithmic automation to solve problems or execute tasks. Human and machine intelligence are complementary: human input improves data-management solutions and thereby advances AI. The first requirement is data at scale; the second is aggregating group contributions into a meaningful result. Google, Facebook, Amazon, Microsoft, IBM, eBay, Baidu, and many other companies routinely take this approach. Over the past year adoption has accelerated sharply, mainly to meet machine learning's need for high-quality data labeling and training data.
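A toy sketch of the second requirement, aggregating group contributions into one result: here crowd labels are merged by simple majority vote. The image IDs and labels are made up; production platforms typically also weight annotators by historical accuracy.

```python
# Aggregate crowd annotations into a single training label per item
# via majority vote (hypothetical data).
from collections import Counter

def majority_label(labels):
    """Return the most common label among crowd annotations."""
    return Counter(labels).most_common(1)[0][0]

crowd_votes = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
}
gold = {item: majority_label(votes) for item, votes in crowd_votes.items()}
print(gold)  # {'img_001': 'cat', 'img_002': 'dog'}
```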
Despite the market potential, crowdsourcing still faces obstacles, including limited awareness of its benefits and concerns about quality and security. As the overall AI market matures, adoption is growing rapidly. For improving the accuracy of machine learning models, human-in-the-loop crowdsourcing is a viable (and possibly the most reliable) solution, and companies working in AI and machine learning should use it as an enabler of their AI solutions. The approach offers more elastic costs and broader problem-solving, model-training, classification, and verification capabilities than internal teams or traditional outsourcing.
When a machine learning algorithm reaches its accuracy limit, humans can further improve the output (for example, through content moderation, verification of details detected in text, or validation of information-retrieval and search results).
Crowdsourcing is indispensable. It will greatly benefit analytics teams applying human intelligence to unstructured text, image, audio, and video data for AI, machine learning, and information-quality work, as well as those seeking scarce skills such as data science. Tasks can include content moderation, classification, data collection, product categorization, refining product descriptions, text translation, annotating real-estate photos, and audio transcription.

8. Natural language generation
Natural language generation (NLG) automatically produces natural-language descriptions of data insights. In an analytics context the narrative is dynamic: as users interact with the data, NLG explains the key findings or the meaning of a chart. NLG combines natural language processing with machine learning and AI to dynamically identify the most relevant findings in the data (trends, relationships, correlations).
Text analytics focuses on drawing analytical conclusions from text data, while NLG synthesizes text by combining analytical output with dynamically selected narrative. While NLG is still in the early adoption phase, it is already reducing the time and cost of repetitive analysis, such as business and regulatory reporting, earnings reports in financial services, benefits reports in government, weather forecasts, and personalized messaging in advertising.
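A minimal, template-based sketch of the idea: compute a trend from the data, then render it as a sentence. Real NLG products select and phrase narratives far more dynamically, and the metric name and numbers below are invented.

```python
# Toy template-based NLG: turn a computed trend into a sentence.
def describe_trend(metric, values):
    change = values[-1] - values[0]
    pct = 100 * change / values[0]
    direction = "rose" if change > 0 else "fell" if change < 0 else "held steady"
    return f"{metric} {direction} {abs(pct):.1f}% over the period."

print(describe_trend("Quarterly revenue", [100.0, 104.0, 112.0]))
# -> Quarterly revenue rose 12.0% over the period.
```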
9. Chatbot
A chatbot is a standalone conversational interface that uses an application, messaging platform, social network, or chat solution for its conversations. Chatbots vary in sophistication, from simple decision-tree-based marketing tools to feature-rich platform-based assistants. They can be text-based, voice-based, or a combination of the two.
Chatbots are growing rapidly in social media, kiosks, human resources, and commercial settings such as enterprise-software front ends and self-service. Nonetheless, the vast majority of chatbots are simple and rely on scripted responses in a decision tree. Related to chatbots are virtual agents, which are broader and more complex, require more infrastructure and staff to maintain, and are designed for longer-term relationships. Beyond a single interaction, users will interact with hundreds of chatbots but few virtual agents. The chatbot is the face of AI and will affect all areas where people communicate. Customer service is a typical area where chatbots are already making an impact and have huge potential.
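The scripted, decision-tree style that most chatbots rely on can be sketched in a few lines: each node pairs a canned reply with keyword branches to the next node. The node names and replies below are invented for illustration.

```python
# Minimal decision-tree chatbot: scripted responses, no ML.
# Each node: (reply text, {keyword: next_state}).
TREE = {
    "start": ("Do you need help with 'billing' or 'shipping'?",
              {"billing": "billing", "shipping": "shipping"}),
    "billing": ("For billing, reply 'refund' or 'invoice'.",
                {"refund": "refund", "invoice": "invoice"}),
    "shipping": ("Your order ships within 2 business days.", {}),
    "refund": ("Refunds are processed in 5 business days.", {}),
    "invoice": ("Invoices are emailed monthly.", {}),
}

def respond(state, user_text):
    """Return (reply, next_state) given the current node and user input."""
    _, branches = TREE[state]
    for keyword, nxt in branches.items():
        if keyword in user_text.lower():
            return TREE[nxt][0], nxt
    return TREE[state][0], state  # re-prompt on unrecognized input

reply, state = respond("start", "I have a billing question")
print(reply)  # For billing, reply 'refund' or 'invoice'.
```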

(II) The peak stage (Peak of Inflated Expectations)
10. Artificial intelligence platform as a service
Cloud AI and machine learning platform services, collectively known as AI platform as a service (AI PaaS), provide AI model-building tools, APIs, and related middleware to build, train, deploy, and run machine learning models on pre-built infrastructure as cloud services. These services cover vision, speech, and general-purpose data classification and prediction models.
Leading cloud providers, including Amazon, Google, IBM, and Microsoft, are all vying to be customers' platform of choice. Over the past few years, AI applications that use cloud services have continued to gain support and acceptance from data scientists and developers. AI PaaS products focus on three key areas: machine learning, natural language processing, and computer vision.
Cloud-based approaches to AI are beginning to affect the data science and machine learning platform market, especially as organizations test and prototype AI. With AI-optimized chips and large volumes of dedicated data storage, the cloud is an ideal environment for organizations to build and deploy AI applications without the risk, cost, and delay of traditional on-premises procurement. In addition, the cloud provides packaged APIs and tools that make it easier for developers to integrate AI capabilities into their applications.
AI PaaS products combine these three key AI services. First, machine learning (ML): the packaged ML services provided by AI cloud providers unify the end-to-end ML workflow. Second, natural language processing (NLP): pre-trained NLP systems can power cloud-based chatbots for a variety of use cases. Third, computer vision (CV): face detection, recognition, and analysis can unlock new image-based data sources. Combined with cloud services, these capabilities will accelerate the expansion of digital business technology platform services in the short term.
11. Dedicated deep neural network chips
A dedicated deep neural network (DNN) chip is a special-purpose processor that accelerates a system's computation. Deep neural networks are statistical models that detect and classify features in input data such as sounds, images, or text (for example, sentences). A DNN system has two stages: in the training stage, the DNN traverses a large data set and distills it into a small set of DNN parameters; in the inference stage, the DNN uses that parameter set to classify inputs such as images, speech, or text. Today the vast majority of training and inference tasks run on GPUs plus dedicated DNN chips, which deliver higher performance and lower power consumption than CPUs or GPUs when accelerating neural networks.
Google has deployed dedicated DNN chips at scale, called tensor processing units (TPU v1, v2, and v3), to serve inference workloads such as voice and image recognition. TPU v2 and v3 also accelerate training, a task previously left to GPUs. Other dedicated chips are emerging: Graphcore has developed a custom processor that delivers extremely high performance for DNN-based applications, and its marketing materials claim nearly double the performance of GPUs. Intel is also developing a dedicated integrated circuit code-named "Lake Crest," based on technology acquired with Nervana Systems in 2016. The performance and power benefits of dedicated DNN chips are significant, but their widespread use also requires standardization of neural network architectures and support for different DNN frameworks.
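The two stages described above, training that distills a data set into a small parameter set, and inference that reuses only those parameters, can be shown in miniature with a single sigmoid neuron standing in for a deep network. The one-dimensional data below is a toy example, not a realistic workload.

```python
# Training vs. inference in miniature: a single sigmoid neuron
# trained by gradient descent on toy labeled data.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training stage: traverse the labeled data set and distill it
# into a small parameter set (w, b).
data = [((0.0,), 0), ((0.2,), 0), ((0.8,), 1), ((1.0,), 1)]
w, b = 0.0, 0.0
for _ in range(2000):
    for (x,), y in data:
        p = sigmoid(w * x + b)
        grad = p - y              # gradient of cross-entropy loss w.r.t. z
        w -= 0.5 * grad * x
        b -= 0.5 * grad

# Inference stage: the data set is no longer needed; only (w, b) is.
def predict(x):
    return int(sigmoid(w * x + b) > 0.5)

print(predict(0.1), predict(0.9))  # 0 1
```

The asymmetry between the two stages is what motivates separate hardware: training repeatedly sweeps the whole data set, while inference only evaluates the learned parameters once per input.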

12. Intelligent robots
Intelligent robots are electromechanical agents that work autonomously in the physical world, learning to solve problems in a short time from human-guided training and demonstration, or from supervised experience on the job. They can interact with humans using speech and language and, thanks to advanced sensory capabilities, may even work alongside humans.
Far fewer intelligent robots have been adopted than industrial robots. In the past 12 months we have seen mature robot suppliers expand their product lines and new companies enter the smart-robot market (especially from China). As new technology providers and new technologies open up the field, barriers to entry are falling slightly.
Over the past few years, smart robots have attracted wide attention thanks to several major suppliers: Amazon Robotics (formerly Kiva Systems) has deployed smart robots in Amazon warehouses; Google has bought several robotics companies; and in early 2018 LG launched a series of commercial smart robots for hotels, airports, and supermarkets. Several hotels in the United States and two Shangri-La hotels in Singapore now use smart robots to provide room service.
Users in light manufacturing, distribution, retail, hospitality, and healthcare should see smart robots as an alternative to, and a complement of, their human workforce, launching pilot projects to assess product capabilities and quantify benefits. They should review their current business and material-handling processes before deploying intelligent robots, and consider redesigning processes to build a three-to-five-year roadmap toward large-scale deployment. Intelligent robots will have an early business impact across asset-centric, product-centric, and service-centric industries, replacing workers with higher reliability, lower cost, better safety, and higher productivity. Typical and potential use cases include medical material handling, prescription dispensing, patient care, direct material handling, inventory replenishment, finished-goods handling, pick-and-pack, e-commerce order fulfillment, package delivery, shopping assistance, customer care, and security.
13. Conversational user interfaces
In a conversational user interface (CUI), user-machine interaction occurs primarily in the user's spoken or written natural language. Typically informal and two-way, these interactions range from simple utterances to highly complex exchanges. As a design model, the CUI relies on applications and related services, as well as conversational platforms.
In recent years CUIs have exploded: chatbots, messaging platforms, and virtual assistants, particularly home speakers such as the Amazon Echo and Google Home, have all contributed to the growth of conversational user interfaces.
14. Intelligent applications
Intelligent applications embed or integrate AI technologies to replace manual activities with intelligent automation and improved decision support. AI has become the next major battleground: every application and service is expected to incorporate AI to some degree over the next few years. Enterprise application providers are beginning to embed AI in their products and introduce AI platform capabilities, from ERP to CRM to HCM to workforce-productivity applications. AI has the potential to transform organizations and sits at the core of digital business. Back-end enterprise applications are an important part of this transformation because they provide the digital foundation on which most efforts rest. Within many familiar application categories, AI will run unobtrusively while also producing entirely new kinds of applications.
15. Digital ethics
Digital ethics encompasses the value systems and ethical principles governing electronic interaction and the use and sharing of data among people, businesses, governments, and things. Its scope is very broad, covering security, cybercrime, privacy, social interaction, governance, free will, and other aspects of the economy and society. Digital ethics has jumped toward the Peak of Inflated Expectations because of recent negative media coverage, increased public discussion, and growing awareness of regulations such as data-privacy protection. Current topics such as "artificial intelligence," "fake news," and the "digital society" have all triggered the surge in discussion of digital ethics.
16. Graph analytics
Graph analytics is a set of analytic techniques for exploring the relationships between entities of interest, such as organizations, people, and transactions. It consists of models that determine the "connectedness" across data points to create nodes, clusters, and their demarcation points. Nodes are connected explicitly or implicitly, indicating levels of influence, frequency of interaction, or probability.
Graph analytics is climbing steadily toward the Peak of Inflated Expectations, and adoption is growing, mainly because of the need to find insight in exponentially growing volumes of heterogeneous data. Once a highly complex model is developed and trained, its output is easier to store thanks to expanded storage and computing capacity, and graph databases provide an ideal framework for storing, manipulating, and analyzing graphs.
The unique ways many graph databases store and process data, combined with the new skills and graph-specific knowledge they require, may limit growth in usage: for example, knowledge and experience with the Resource Description Framework (RDF), the SPARQL protocol and RDF query language (SPARQL), and emerging languages such as Apache TinkerPop's Gremlin or the recently open-sourced Cypher.
Data and analytics leaders should evaluate opportunities to incorporate graph analytics into their analytics portfolio and strategy. This will let them address high-value use cases that are poorly suited to traditional SQL-based queries and visualizations (such as computing and visualizing the shortest path between two nodes or entities in a network, or their relationships and influence). They should also consider using graph analytics to enhance pattern analysis, where users interact directly with graph elements to discover insights, and where analysis results and output can be stored in a graph database for reuse.
Business scenarios that make ideal graph-analytics use cases include path optimization, market-basket analysis, fraud detection, social network analysis, CRM optimization, location intelligence, supply chain monitoring, load balancing, and special forms of workforce analytics such as enterprise social graphs, digital workplace maps, and recency and frequency analysis.
In law-enforcement investigations, epidemiology, genome research, money-laundering detection, and more, graph analytics is highly effective at assessing risk: analyzing fraud, path optimization, clustering, outlier detection, Markov chains, discrete-event simulation, and so on. The engines used to expose fraud and corruption can also identify risks within an organization and answer accountability questions proactively. A recent example of identifying such networks is the work of the International Consortium of Investigative Journalists. In short, graph analytics is a new "lens" for exploring direct and indirect relationships across multi-structured data.
Graph processing is at the core of many other advanced technologies, such as virtual personal assistants, intelligent advisors, and other smart machines. Once graph processing is complete, graph analytics can extend the potential value of data discovery in modern business intelligence and analytics platforms. Visualizations use size, color, shape, and direction to represent relationships and node attributes.
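A stdlib-only sketch of one use case named above, finding the shortest path between two entities in a relationship graph, using breadth-first search over an adjacency list. The CRM-style entity names are invented for illustration.

```python
# Breadth-first search for the shortest path between two entities
# in an undirected relationship graph (stdlib only, toy data).
from collections import deque

def shortest_path(graph, start, goal):
    """Return the node list of one shortest path, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical CRM-style entity graph.
graph = {
    "Alice": ["Acme Corp"],
    "Acme Corp": ["Alice", "Bob"],
    "Bob": ["Acme Corp", "Invoice 17"],
    "Invoice 17": ["Bob"],
}
print(shortest_path(graph, "Alice", "Invoice 17"))
# -> ['Alice', 'Acme Corp', 'Bob', 'Invoice 17']
```

Expressing the same query in SQL would require a chain of self-joins whose depth must be known in advance, which is exactly why such traversals suit graph tools better.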
17. Prescriptive analytics
Prescriptive analytics refers to a set of analytical capabilities that specify a preferred course of action to meet predefined goals. The most common prescriptive approaches are optimization methods (such as linear programming) and predictive analytics combined with rules, heuristics, and decision-analysis methods (such as influence diagrams). Prescriptive analytics differs from descriptive, diagnostic, and predictive analytics in that its output is a recommended (and sometimes automated) action.
Although the concepts of optimization and decision analysis have been around for decades, better algorithms, cost-effective cloud computing, growing data science expertise, and a better understanding of the available data are gradually broadening their application. Common cases include customer treatment, loan approval, claims triage, and many optimization problems such as supply chain or network optimization and scheduling. Prescriptive analytics can also differentiate a business in planning, whether financial, production, or distribution planning, by helping users explore scenarios and compare recommended courses of action.
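A toy prescriptive example: rather than predicting an outcome, it recommends an action, namely the production mix that maximizes profit under a capacity constraint, found by brute-force search over a tiny linear program. The profits and capacity are made up; real systems use proper LP solvers instead of enumeration.

```python
# Brute-force a tiny production-planning linear program:
# maximize profit subject to a machine-hour capacity constraint.
from itertools import product

profit = {"A": 3, "B": 5}   # profit per unit (hypothetical)
hours = {"A": 1, "B": 2}    # machine hours per unit (hypothetical)
capacity = 10               # available machine hours

best = max(
    ((a, b) for a, b in product(range(11), repeat=2)
     if hours["A"] * a + hours["B"] * b <= capacity),
    key=lambda ab: profit["A"] * ab[0] + profit["B"] * ab[1],
)
print(best)  # recommended units of A and B -> (10, 0)
```

Product A earns 3 per machine hour versus 2.5 for B, so the recommended action spends all 10 hours on A, which is the "prescription" rather than a forecast.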
18. Deep neural networks (deep learning)
Deep neural networks are large neural networks, typically with many processing layers, that enable computers to process far more complex data such as video, images, voice, and text, supporting the latest advances in AI. The Internet giants deploy DNN-based systems across their respective businesses, such as Amazon Alexa's speech-to-text capability, Google's search, image recognition, and self-driving cars, and Facebook's face-recognition technology.
Building and training these systems is difficult. Consistently good results require large amounts of labeled data, data science expertise, and specialized hardware. Most businesses struggle to obtain enough labeled data to support their innovations. Data science experts are scarce, and because IT and Internet giants hire aggressively at high salaries, ordinary businesses find it hard to attract good talent in this area. In addition, optimizing and upgrading computing resources requires significant capital expenditure.
The most widely used variants are convolutional neural networks (CNNs) and recurrent neural networks (RNNs); CNNs are used for image classification, RNNs for text and speech. The level of hype surrounding DNNs is little changed from last year. These technologies help organizations solve previously intractable classification problems, especially those involving images, video, and speech, and considerable resources have been invested in image, speech, and face classification systems and in the associated training and data.
DNNs have the potential to transform and disrupt every industry. The first challenge in trying to exploit them is to identify the business problem to be addressed and to ensure there are enough experts and reasonably good data sets. In fraud detection, quality determination, demand forecasting, and other classification problems involving sequences (e.g., video, audio, or time-series analysis), DNNs have shown greater accuracy than previous state-of-the-art algorithms.

19. VPA-enabled wireless speakers
These cloud-connected, far-field voice-capture devices link users to virtual personal assistant (VPA) services such as Alexa, Google Assistant, Siri, Cortana, and WeChat. With the arrival of screen-equipped VPA speakers in 2017, multimodal interaction entered the VPA experience. Although the conversational experience VPAs provide is still far from perfect, consumers have adopted VPA speakers faster than expected.
20. Machine learning
Machine learning uses mathematical models that extract knowledge and patterns from data to solve business problems. Three main sub-disciplines are distinguished by the kind of observations available: supervised learning, where observations contain input/output pairs (also known as "labeled data"); unsupervised learning, where labels are omitted; and reinforcement learning, where the feedback is an occasional assessment of how good or bad a situation is.
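A minimal example of the first sub-discipline, supervised learning: fitting y = w·x + c from labeled input/output pairs by closed-form least squares. The pairs below are toy data that happen to lie exactly on y = 2x + 1.

```python
# Supervised learning in its simplest form: least-squares fit of a
# line to labeled (input, output) pairs (stdlib only, toy data).
def fit_line(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    c = (sy - w * sx) / n                          # intercept
    return w, c

w, c = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])  # data from y = 2x + 1
print(round(w, 6), round(c, 6))  # 2.0 1.0
```

Unsupervised learning would receive only the x values and look for structure; reinforcement learning would instead learn from rewards earned by acting.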
Machine learning remains one of the hottest concepts in technology because of its broad impact on businesses. Its continued large-scale growth and adoption are driven by ever-growing volumes of data and by complexity that traditional engineering approaches cannot handle.
More and more organizations are exploring machine learning use cases, and many are already in the initial experimental stage. Most organizations are still refining their machine learning methods, and finding the roles and skills needed to execute machine learning projects is a challenge for them. As data volumes and sources increase, so does system complexity, and traditional software engineering approaches become less effective. In the future, many industries will be unable to progress without machine learning.
Machine learning algorithms also face questions about whether their outputs can be interpreted. Organizations should set up a (virtual) team to triage machine learning use cases and build sound evaluation models, pushing the most valuable use cases into production. Data is the "fuel" of machine learning: it is each organization's unique competitive advantage, and high-quality data is the key to success. Although the selection of basic machine learning algorithms is fairly limited, the number of algorithm variations and available data sources is enormous. Machine learning drives improvements and new solutions across a wide range of business problems.
21. Natural language processing
Natural language processing (NLP) provides a direct form of communication between humans and systems: NLP comprises computational-linguistics techniques designed to parse, interpret (and sometimes generate) human language. NLP handles the pragmatic (context), semantic (meaning), syntactic (grammar), and lexical (word) aspects of natural language. "Speech" is usually reserved for voice-processing technologies, which are essentially signal-processing systems.
Enterprise use of NLP is growing as capabilities improve and as new use cases emerge around conversational agents and automated machine translation. Traditional syntactic and semantic approaches are increasingly being replaced by deep neural network (DNN) approaches.
Human language is complex, and while NLP solutions have made progress, many nuances and ambiguities still require human intervention to interpret correctly. DNNs are experimental and fragile, and their ability to understand context, reason, and synthesize is not yet fully satisfactory. Many NLP solutions require experts to ensure the consistent accuracy of grammars and models.
NLP offers businesses significant opportunities to improve their operations and services. For many businesses, NLP's strongest and most direct use cases relate to improved customer service (affecting cost, service level, customer satisfaction and sales) and employee support (including making employees smarter and more effective at work). To accelerate NLP adoption, enterprises should develop new functional modules that implement specific skills. Given the increasing use of data science techniques in NLP applications, upgrading the skills of data science talent may also be necessary.
Finally, NLP solutions offer varying capabilities for knowledge-base integration, content mapping, search enhancement and text summarization. Enterprise developers should therefore test and verify the effectiveness of these solutions before making major decisions. If an enterprise invests in specialized grammars, it should verify that they are compatible across vendors.
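To make the text-summarization capability concrete, here is a deliberately crude, pure-Python extractive summarizer that scores sentences by word frequency. Real NLP systems are far more sophisticated, and the example document is invented for illustration.

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Score each sentence by the document-wide frequency of its words and
    return the top-n sentences in their original order. This is the
    simplest extractive-summarization heuristic, not a production method."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: -sum(freq[w] for w in re.findall(r'[a-z]+', s.lower())),
    )
    top = set(scored[:n])
    return [s for s in sentences if s in top]

doc = ("NLP parses human language. NLP also interprets human language. "
       "Dogs bark.")
print(summarize(doc, 1))  # ['NLP also interprets human language.']
```

The sentence repeating the document's most frequent words wins, which is exactly the weakness of frequency heuristics that DNN-based summarizers try to overcome.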

22. Robotic process automation (RPA) software
Robotic process automation (RPA) software combines user-interface recognition technology with workflow execution technology. It can mimic a human using a screen and keyboard to drive applications and perform system-based work, and it is designed to automate application use.
RPA software is a "glue" technology that lets you stitch systems together. To accomplish activities more complex than simple replay, it needs to be combined with capabilities for reading handwritten, structured or unstructured data, or with processing performed by chatbots or machine learning.
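The "glue" role can be sketched in pure Python: a hypothetical bot replays the read-and-type steps a human would perform to move records between two systems that lack an API integration. The `LegacyApp`/`ModernApp` classes are invented stand-ins, not any real RPA product's API; real tools drive live user interfaces.

```python
class LegacyApp:
    """Stands in for an old system that only exposes a screen form."""
    def __init__(self, records):
        self.records = records

    def read_field(self, record_id, field):
        return self.records[record_id][field]

class ModernApp:
    """Stands in for a newer system driven through its entry form."""
    def __init__(self):
        self.submitted = []

    def type_into_form(self, **fields):
        self.submitted.append(fields)

def run_bot(source, target, record_ids):
    """The automation script: the same read/type sequence, repeated per record."""
    for rid in record_ids:
        name = source.read_field(rid, "name")
        amount = source.read_field(rid, "amount")
        target.type_into_form(name=name, amount=amount)

legacy = LegacyApp({1: {"name": "ACME", "amount": 250}})
crm = ModernApp()
run_bot(legacy, crm, [1])
print(crm.submitted)  # [{'name': 'ACME', 'amount': 250}]
```

The design point is that neither system changes: the bot works entirely through the interfaces a human would use, which is what makes RPA quick to deploy and also brittle when those interfaces change.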
23. Virtual assistant
Virtual assistants (VAs) help users or businesses complete sets of tasks that previously could only be done by humans. VAs use artificial intelligence and machine learning (such as natural language processing, predictive models and personalized services) to assist people and automate tasks. VAs listen to and observe behavior, build and maintain data models, and anticipate and recommend actions. They can be deployed in multiple use cases, including virtual personal assistants, virtual customer assistants and virtual employee assistants. VA applications are increasingly dominated by conversational interfaces such as Apple's Siri, Google Assistant, Microsoft's Cortana, IPsoft's Amelia, Amazon's Alexa and IBM's Watson Assistant.
24. Cognitive computing
Cognitive computing is a technology that improves performance across a wide range of cognitive tasks. These systems are interactive and iterative, recalling previous interactions. They are also context-sensitive and can adapt to changes in information and goals. We recognize that "cognitive computing" is an overused marketing term among vendors in today's market. Cognitive computing climbed quickly to the Peak of Inflated Expectations as major vendors popularized and hyped the term in the latest generation of the AI market.

(III) The sliding stage
25. The FPGA accelerator
The FPGA accelerator is a server-based reconfigurable computing accelerator that provides extremely high performance by accelerating application processing in programmable hardware. An FPGA has a large number of programmable logic blocks, reconfigurable interconnects and storage subsystems that can be configured to accelerate specific algorithmic functions, offloading tasks from the main system processor. FPGAs are not programmed in common application languages; instead, circuits are described in hardware design languages such as VHDL, which most software engineers find difficult to learn, making FPGA programming harder. In data centers, FPGAs can apply consistent processing operations to large volumes of data, for example in high-frequency trading (HFT). Microsoft is using FPGAs for search analytics, while Edico Genome's Dragen Bio-IT platform, built on FPGAs, achieves high-performance genome sequencing workflows.
FPGAs are typically configured using hardware description languages (such as register-transfer-level designs in VHDL) and are very complex to use, which has prevented widespread adoption. However, the major FPGA vendors (Intel and Xilinx) are working to address this through libraries and toolsets that allow FPGAs to be configured using a software-centric programming model. Adopting FPGAs has also become easier with the help of new frameworks such as OpenCL, which reduces the time and skill required. Emerging workloads such as deep learning (inference) are sparking interest in FPGAs. Intel's integration of FPGAs with mainstream server CPUs, and development platforms such as Amazon Web Services (AWS) FPGA instances, have made FPGAs easier to access and pushed their adoption in data centers.
Today, the biggest growth opportunity for FPGAs in the data center is the inference portion of deep learning. Given the maturing software ecosystem, FPGA accelerators can deliver dramatic performance improvements within a relatively small power envelope. Enterprises should determine which applications can benefit meaningfully from FPGAs, evaluate the availability of hardware for data center deployment (FPGA-based PCIe add-in cards, or servers with integrated FPGA processors), and accelerate development using cloud-based FPGA services. FPGAs are well suited to AI workloads because they excel at low-precision (8-bit and 16-bit) processing, but programmability remains a major challenge that limits widespread use. Enterprises should evaluate FPGA-based solutions for genome sequencing, real-time trading, video processing and deep learning (inference). Leaders can provide FPGAs through cloud-based infrastructure.
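The low-precision advantage can be illustrated with a small, self-contained sketch: quantizing values to 8-bit integers and computing a dot product in integer arithmetic, the kind of operation an FPGA (or other inference accelerator) executes cheaply. The scale factor and tolerance below are illustrative assumptions, not any vendor's quantization scheme.

```python
def quantize(xs, scale=127.0):
    """Map floats in [-1, 1] to 8-bit integers -- the kind of low-precision
    representation that FPGAs process very efficiently."""
    return [max(-128, min(127, round(x * scale))) for x in xs]

def int8_dot(a, b, scale=127.0):
    """Dot product computed entirely in integer arithmetic, rescaled to float."""
    return sum(x * y for x, y in zip(a, b)) / (scale * scale)

a = [0.5, -0.25, 1.0]
b = [0.5, 0.25, -1.0]
exact = sum(x * y for x, y in zip(a, b))           # -0.8125 in full precision
approx = int8_dot(quantize(a), quantize(b))        # close, but 8-bit throughout
assert abs(exact - approx) < 0.01
print(exact, round(approx, 4))
```

For inference, accuracy typically survives this loss of precision, while the hardware does far more multiply-accumulates per watt, which is why 8-bit inference is the workload pulling FPGAs into data centers.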

26. Computer vision
Computer vision (CV) involves acquiring, processing and analyzing real-world images and video so that machines can extract meaningful contextual information from the physical world. CV encompasses many distinct and important technologies, including machine vision, optical character recognition, image recognition, pattern recognition, facial recognition, edge detection and motion detection.
Algorithms and models for solving visual problems have been around for more than half a century, and the emergence of deep neural networks, the availability of large amounts of data, and large-scale parallel processors have injected new vitality into the field, supporting supervised and unsupervised learning, identification, classification, prediction and manipulation. Thirty years ago, sorting objects was a difficult manual task. The results of the ImageNet challenge over the past eight years are the best evidence of how far the field has come: the error rate has fallen by roughly 30 percentage points. The development of computer vision benefits from: (1) the technological maturity of DNNs and related AI technologies; (2) CV's broad applicability to fields such as robotics, autonomous vehicles, unmanned aircraft, augmented reality and virtual reality; (3) the challenge most enterprises face in processing all the image and video data they collect, and the active effort to automate that processing; (4) computer vision being a special use case and natural extension of the Internet of Things (IoT), acting as an external sensor that extends IoT's reach.
Vision is an excellent complement to other sensor data, such as geolocation, inertia and audio. As a result, it also enhances humans' ability to interact with the digital and physical worlds. This has sparked widespread interest in applications such as autonomous vehicles, robots, unmanned aircraft, augmented, mixed and virtual reality, security, biometrics and more.
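As a toy illustration of the edge detection listed among CV techniques, the following pure-Python sketch thresholds horizontal intensity gradients in a tiny grayscale "image". Production systems use far more robust operators (Sobel, Canny) or learned features; the image and threshold here are invented.

```python
def edges(img, threshold=100):
    """Mark pixels whose horizontal intensity gradient exceeds a threshold --
    the simplest possible form of edge detection."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            grad = abs(img[y][x + 1] - img[y][x - 1])
            out[y][x] = 1 if grad > threshold else 0
    return out

# A 4x6 grayscale "image": dark left half, bright right half.
img = [[0, 0, 0, 255, 255, 255] for _ in range(4)]
print(edges(img))  # each row: [0, 0, 1, 1, 0, 0] -- the vertical boundary
```

The marked column of 1s is the dark-to-bright boundary: the "meaningful contextual information" that higher-level CV (recognition, tracking) builds on.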
27. Predictive analysis
Predictive analytics is an advanced form of analytics that answers the question "what will happen?" or, more accurately, "what could happen?" by examining data or content. As an application of artificial intelligence, most predictive analytics employs techniques such as regression analysis, multivariate statistics, pattern matching and predictive modeling.
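A minimal worked example of the regression technique mentioned above: fitting a least-squares line to a hypothetical sales history and extrapolating one step ahead. The data is invented purely for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, one of the core techniques
    behind predictive analytics."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical monthly sales history; predict next month ("what will happen?").
months = [1, 2, 3, 4]
sales = [10.0, 12.0, 14.0, 16.0]
a, b = fit_line(months, sales)
print(a * 5 + b)  # 18.0
```

On this perfectly linear toy series the forecast is exact; real predictive models quantify the uncertainty around "what could happen" as well.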

28. Autonomous driving
Autonomous or self-driving vehicles can navigate and drive to a specified location without human intervention, using a variety of onboard sensing and positioning technologies such as lidar, cameras, GPS and map data, combined with AI-based decision-making. Driverless cars are currently in the spotlight.
Over the past year, there have been signs of a trough of disillusionment. In early 2018, there were several accidents involving autonomous vehicles, including the death of a pedestrian. Several previously claimed driverless milestones quietly passed without delivering on their promise; in fact, those claims were unrealistic and exaggerated.
Artificial intelligence (AI) is the key technology for realizing autonomous vehicles, and the development of machine learning algorithms for them is accelerating. The main challenges remain reducing technology and industrialization costs, but they increasingly also include regulatory, legal and social considerations, such as operating permits, liability, insurance and human-machine interaction.
Autonomous vehicle technology has disruptive potential not only in smart mobility and logistics, but also in shipping, mining, agriculture, industry, security and military operations. Continuous advances in sensing, positioning, imaging, guidance, mapping and communications technologies, coupled with AI algorithms and high-performance computing, have brought autonomous vehicles closer to reality. In 2018, however, complexity and cost challenges remain large, affecting reliability and affordability.
Adoption of self-driving technology will still unfold in three stages: assisted driving, semi-autonomous driving and fully driverless vehicles. Each stage demands greater technical maturity and reliability, which in turn depend on broad participation: automakers, service providers, governments and technology suppliers (of software, hardware, sensors, map data and networks, for example) should cooperate on joint research and investment to advance both the necessary technology and the legislative framework for autonomous driving.
In addition, educating the public about the benefits of self-driving cars is vital. Self-driving cars will have a disruptive impact on some jobs, such as bus, taxi and truck drivers. Autonomous vehicles become mobile computing systems, providing an ideal platform for consuming and creating digital content, including location-based services and vehicle-centric information and communications technologies. They are also part of mobility innovation and new transportation services that could disrupt established business models; for example, driverless cars that pick up passengers when needed will eventually enable new on-demand products. Self-driving cars will bring significant social benefits, including fewer accidents, injuries and deaths and better traffic management, which could in turn affect other socio-economic trends. For example, if people can work or relax during their commute in a self-driving car, living near a city center and close to work becomes less critical, which could slow urbanization.
29. Commercial drones
Commercial unmanned aerial vehicles are small helicopters, fixed-wing aircraft, multi-rotor aircraft and hybrid aircraft with no human pilot aboard. They are either remotely controlled by human pilots on the ground or equipped for autonomous navigation. Unlike their military counterparts, they are used for commercial purposes.
Commercial drones entered the trough of disillusionment in 2018. Technically, such drones are relatively mature and capable of increasingly complex missions. However, their adoption is often hampered by restrictions, particularly for flights beyond visual line of sight, over people or in restricted airspace (such as near airports), operations that are heavily regulated in most countries. One effort to ease this is the United States' Low Altitude Authorization and Notification Capability (LAANC) initiative, which accelerates waiver approvals for flights in restricted airspace. In addition, the high cost of vertically specialized end-to-end drone solutions, including equipment, supporting software and flight operations, has hampered large-scale use by end users. Gartner expects commercial drones to climb out of the trough within two years, provided regulatory conditions and certain technical elements improve as expected. In particular, autonomous flight will boost the market, but its introduction will require both regulatory reform and technological progress. Overall, enterprise drone programs should have both short- and long-term goals. Today, leading uses include aerial photography, mapping and surveying, volume measurement and remote inspection. Users should also consider how best to utilize the captured data.
Most importantly, commercial drones can extend the capabilities of workers such as surveyors, inspectors, drivers and photographers, who traditionally perform labor-intensive and sometimes unsafe tasks. Drones thus boost productivity by reducing or redeploying personnel, while providing real-time access to data and improving worker safety. Commercial drones can add value in sectors such as agriculture, construction, emergency services and extractive industries. In most verticals their value lies in reducing operating expenses and improving safety, but there are also revenue opportunities in industries such as film and photography.

30. Augmented reality
Augmented reality (AR) is the real-time use of information in the form of text, graphics, audio and other virtual enhancements, integrated with real-world objects and rendered via heads-up displays or projected graphics overlays. It is this "real world" element that distinguishes AR from virtual reality. AR is designed to enhance users' interaction with their environment, not to separate them from it.
31. Knowledge management tools
Knowledge management (KM) tools are used to create, modify and access IT knowledge repositories. KM tools are typically linked to portals that support self-service, so end users can access relevant knowledge assets themselves. These products are defined by their ability to combine, store and access information about IT and non-IT services. A KM tool can be used standalone or as an integrated component of a broader IT service management tool. For many IT organizations, KM offers untapped potential to optimize, drive efficiency and achieve economies of scale in IT systems.
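A minimal in-memory sketch of what a KM repository does (create, modify, and keyword search for self-service access). The class and the articles are invented for illustration and correspond to no particular product.

```python
class KnowledgeBase:
    """Toy KM repository: articles can be created, modified, and retrieved
    by keyword search so end users can help themselves."""
    def __init__(self):
        self.articles = {}

    def create(self, key, text):
        self.articles[key] = text

    def modify(self, key, text):
        if key not in self.articles:
            raise KeyError(key)
        self.articles[key] = text

    def search(self, term):
        """Case-insensitive substring search across all articles."""
        term = term.lower()
        return [k for k, t in self.articles.items() if term in t.lower()]

kb = KnowledgeBase()
kb.create("vpn", "How to reset your VPN password")
kb.create("printer", "Fixing common printer jams")
print(kb.search("password"))  # ['vpn']
```

Real KM tools add versioning, access control and relevance ranking on top of this create/modify/search core, and integrate it into the service management portal.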

(IV) The climbing stage
32. Virtual reality
Virtual reality (VR) provides a computer-generated 3D environment that surrounds the user and responds to individual actions in a natural way, usually via an immersive head-mounted display (HMD). Gesture recognition or handheld controllers provide hand and body tracking, and tactile (haptic) feedback can be incorporated. Room-based systems can provide a 3D experience while moving across a wide area, or can be used with multiple participants. Immersive VR applications are more advanced than other graphical simulations.
(V) The maturity stage
33. GPU accelerator
A GPU is a highly parallel floating-point processor designed for graphics and visualization work. Over the past decade, NVIDIA and other companies have added programmable capabilities to GPUs, enabling applications to access deep, fast floating-point resources. GPUs also have very high-bandwidth memory subsystems. These capabilities provide significant performance improvements for many highly parallel, repetitive and computationally intensive applications.
GPU-intensive applications include molecular dynamics, computational fluid dynamics, financial modeling and geospatial technologies. Programming GPUs can be challenging because execution order and code optimization are critical, though the programmability challenge has been largely addressed by architectural support such as CUDA. We expect DNN technology to mature quickly with the support of open frameworks from large cloud providers, including TensorFlow, Torch, Caffe, Apache MXNet and the Microsoft Cognitive Toolkit. GPU-accelerated computing delivers extreme performance and fast time to completion for the highly parallel, compute-intensive work in HPC and DNN training. Cloud GPUs move graphics and compute processing from on-premises into the cloud. High-performance computing and deep learning are essential to many digital business strategies, and traditional CPU-based enterprise ecosystems are inadequate for rapidly growing workloads. Enterprises should use mature GPU technology for selected HPC applications and deep learning infrastructure.
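The data-parallel pattern that GPUs exploit can be shown with a classic SAXPY kernel (y = a*x + y, elementwise). Here a plain Python comprehension stands in for the thousands of GPU threads that would each compute one independent element; this is a conceptual sketch, not GPU code.

```python
def saxpy(a, xs, ys):
    """Elementwise y = a*x + y, the canonical data-parallel kernel.
    On a GPU each element would be computed by a separate thread; every
    element is independent, so all of them can run at once."""
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
```

Workloads dominated by such independent, repetitive arithmetic (graphics, molecular dynamics, DNN training) are precisely where GPUs deliver their large speedups over CPUs.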
34. Ensemble learning
Ensemble learning is a machine learning approach in which a set of predictive models is built and their outputs are combined into a single output for the whole group. It draws heavily on the "wisdom of crowds" principle, in which diversity of opinion, or of model output, is key. Adoption of ensemble techniques continues to grow steadily; all major data science providers offer them as part of their portfolios, and ensemble learning has become a method widely used by both data scientists and citizen data scientists.
Almost all predictive analytics use cases and machine learning tasks can benefit from ensemble techniques. Success stories continue to enhance ensemble learning's reputation for improving predictive accuracy, and ensemble methods are frequently applied, and perform well, in analytics competitions such as the KDD Cup and Kaggle contests.
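A minimal sketch of the combine-the-outputs idea: three hypothetical threshold classifiers are merged by majority vote, the simplest ensemble strategy. The models and input values are invented for illustration.

```python
from collections import Counter

def majority_vote(models, x):
    """Combine several predictors into one output by taking the most common
    vote -- the "wisdom of crowds" principle in its simplest form."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers that agree in the middle but disagree at the margins.
models = [
    lambda x: int(x > 0.4),
    lambda x: int(x > 0.5),
    lambda x: int(x > 0.6),
]
print(majority_vote(models, 0.55))  # 1 (two of three vote positive)
```

Production ensembles (bagging, boosting, stacking) weight and train the members more carefully, but all rest on the same principle: diverse models err differently, so their combined output is more accurate than any single one.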
35. Speech recognition
Speech recognition technology transforms human speech into text for further processing. Over the past three years, speech recognition performance has improved rapidly. IBM, Microsoft, Google, Amazon and Baidu all demonstrated rapid technological advances in 2016-2017, claiming performance on par with human transcription.
In 2018, Google improved the performance of its Cloud Speech-to-Text API by providing multiple machine learning models adapted to different use-case environments (such as phone calls, voice commands and video) and by improving punctuation to make transcripts more readable.
Along with algorithmic advances, speech-to-text applications are being driven by advances in hardware. The adoption of conversational agents (such as chatbots and virtual assistants) lets businesses and consumers use voice interaction on smartphones, game consoles and virtual personal assistant speakers. Use of speech-to-text technology is also growing in connected smart homes and cars, and in embedded solutions running on edge devices without cloud support, enabling new usage scenarios.


Author: Meng Haihua, associate researcher and PhD, Industrial Innovation Laboratory, Shanghai Institute of Science

Source: Strategic Cooperation Frontier
