000 04535nam a22006135i 4500
001 978-3-031-23190-2
003 DE-He213
005 20240507154521.0
007 cr nn 008mamaa
008 230523s2023 sz | s |||| 0|eng d
020 _a9783031231902
_9978-3-031-23190-2
024 7 _a10.1007/978-3-031-23190-2
_2doi
050 4 _aQA76.9.N38
072 7 _aUYQL
_2bicssc
072 7 _aCOM073000
_2bisacsh
072 7 _aUYQL
_2thema
082 0 4 _a006.35
_223
100 1 _aPaaß, Gerhard.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
245 1 0 _aFoundation Models for Natural Language Processing
_h[electronic resource] :
_bPre-trained Language Models Integrating Media /
_cby Gerhard Paaß, Sven Giesselbach.
250 _a1st ed. 2023.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2023.
300 _aXVIII, 436 p. 125 illus., 112 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aArtificial Intelligence: Foundations, Theory, and Algorithms,
_x2365-306X
505 0 _a1. Introduction -- 2. Pre-trained Language Models -- 3. Improving Pre-trained Language Models -- 4. Knowledge Acquired by Foundation Models -- 5. Foundation Models for Information Extraction -- 6. Foundation Models for Text Generation -- 7. Foundation Models for Speech, Images, Videos, and Control -- 8. Summary and Outlook.
506 0 _aOpen Access
520 _aThis open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models BERT, GPT, and the sequence-to-sequence transformer are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
650 0 _aNatural language processing (Computer science).
650 0 _aComputational linguistics.
650 0 _aArtificial intelligence.
650 0 _aExpert systems (Computer science).
650 0 _aMachine learning.
650 1 4 _aNatural Language Processing (NLP).
650 2 4 _aComputational Linguistics.
650 2 4 _aArtificial Intelligence.
650 2 4 _aKnowledge Based Systems.
650 2 4 _aMachine Learning.
700 1 _aGiesselbach, Sven.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
710 2 _aSpringerLink (Online service)
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031231896
776 0 8 _iPrinted edition:
_z9783031231919
776 0 8 _iPrinted edition:
_z9783031231926
830 0 _aArtificial Intelligence: Foundations, Theory, and Algorithms,
_x2365-306X
856 4 0 _uhttps://doi.org/10.1007/978-3-031-23190-2
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
912 _aZDB-2-SOB
999 _c37458
_d37458