Building Transformer-Based Natural Language Processing Applications Training

  • Learn via: Classroom
  • Duration: 1 Day
  • Price: Please contact for booking options
We can host this training at your preferred location. Contact us!

Applications for natural language processing (NLP) have exploded in the past decade. With the proliferation of AI assistants and organizations infusing their businesses with more interactive human-machine experiences, understanding how NLP techniques can be used to manipulate, analyze, and generate text-based data is essential. Modern techniques can capture the nuance, context, and sophistication of language, just as humans do. And when designed correctly, developers can use these techniques to build powerful NLP applications that provide natural and seamless human-computer interactions within chatbots, AI voice agents, and more.
Deep learning models have gained widespread popularity for NLP because of their ability to accurately generalize over a range of contexts and languages. Transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), have revolutionized NLP by offering accuracy comparable to human baselines on benchmarks like SQuAD for question answering, entity recognition, intent recognition, sentiment analysis, and more.
In this workshop, you’ll learn how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents. You’ll also learn how to leverage Transformer-based models for named-entity recognition (NER) tasks and how to analyze various model features, constraints, and characteristics to determine which model is best suited for a particular use case based on metrics, domain specificity, and available resources.

Assessment type:
  • Skills-based coding assessments evaluate students’ ability to build an NLP application, including a neural module pipeline and training.
  • Multiple-choice questions evaluate students’ understanding of the NLP concepts presented in the class.

Certificate:
Upon successful completion of the assessments, participants will receive an NVIDIA Deep Learning Institute certificate to recognize their subject matter competency and support professional career growth.
Why Choose NVIDIA Deep Learning Institute for Hands-On Training?
  • Access workshops from anywhere with just your desktop/laptop computer and an internet connection. Each participant will have access to a fully configured, GPU-accelerated workstation in the cloud.
  • Obtain hands-on experience with the most widely used, industry-standard software, tools, and frameworks.
  • Learn to build deep learning and accelerated computing applications for industries such as healthcare, robotics, manufacturing, and more.
  • Gain real-world expertise through content designed in collaboration with industry leaders, such as the Children’s Hospital of Los Angeles, Mayo Clinic, and PwC.
  • Earn an NVIDIA Deep Learning Institute certificate to demonstrate your subject matter competency and support your career growth.

Prerequisites:
  • Experience with Python coding and use of library functions and parameters
  • Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras
  • Basic understanding of neural networks

Learning objectives:
  • Understand how text embeddings have rapidly evolved, from Word2Vec to recurrent neural network (RNN)-based embeddings to Transformers
  • See how Transformer architecture features, especially self-attention, are used to create language models without RNNs (a minimal sketch follows this list)
  • Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results
  • Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering
  • Manage inference challenges and deploy refined models for live applications
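To give a feel for the self-attention mechanism mentioned above, here is a minimal sketch of scaled dot-product attention in PyTorch. The function name, tensor shapes, and toy inputs are illustrative assumptions, not taken from the workshop materials:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v, mask=None):
        # q, k, v have shape (batch, seq_len, d_k)
        d_k = q.size(-1)
        # Similarity of every query with every key, scaled by sqrt(d_k)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
        if mask is not None:
            # Block disallowed positions (e.g., padding) before the softmax
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)  # each row sums to 1
        return weights @ v, weights

    # Toy usage: one sequence of 4 tokens with 8-dimensional projections
    q = k = v = torch.randn(1, 4, 8)
    out, attn = scaled_dot_product_attention(q, k, v)
    print(out.shape, attn.shape)  # (1, 4, 8) and a (1, 4, 4) attention matrix

Because every token attends to every other token in a single step, no recurrence is needed, which is what lets Transformers replace RNNs for language modeling.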

Workshop outline:

Introduction
  • Meet the instructor.
  • Create an account at courses.nvidia.com/join
Introduction to Transformers
Explore how the Transformer architecture works in detail:
  • Build the Transformer architecture in PyTorch.
  • Calculate the self-attention matrix.
  • Translate English to German with a pre-trained Transformer model (a minimal sketch follows this list).
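The workshop builds this exercise in PyTorch; as a rough stand-in, the following sketch runs English-to-German translation with a publicly available pre-trained Transformer via the Hugging Face transformers library. The checkpoint Helsinki-NLP/opus-mt-en-de is an illustrative assumption, not necessarily the model used in class:

    from transformers import pipeline

    # Load a pre-trained English->German translation model
    # (Helsinki-NLP/opus-mt-en-de is an illustrative choice, not the
    # workshop's prescribed checkpoint)
    translator = pipeline("translation_en_to_de",
                          model="Helsinki-NLP/opus-mt-en-de")

    result = translator("Transformers have revolutionized natural language processing.")
    print(result[0]["translation_text"])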
Self-Supervision, BERT, and Beyond
Learn how to apply self-supervised Transformer-based models to concrete NLP tasks using NVIDIA NeMo:
  • Build a text classification project to classify abstracts.
  • Build a named-entity recognition (NER) project to identify disease names in text (a rough sketch follows this list).
  • Improve project accuracy with domain-specific models.
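The class implements these projects with NVIDIA NeMo; purely to illustrate what the NER task looks like in code, here is a minimal sketch using the Hugging Face transformers library instead. The checkpoint dslim/bert-base-NER is a general-purpose stand-in assumption; the workshop's point is that a domain-specific biomedical model does better at catching disease names:

    from transformers import pipeline

    # Token-classification (NER) pipeline; dslim/bert-base-NER is a
    # general-purpose stand-in, not the workshop's domain-specific model
    ner = pipeline("ner",
                   model="dslim/bert-base-NER",
                   aggregation_strategy="simple")

    text = "Dr. Smith diagnosed the patient at Boston General Hospital."
    for entity in ner(text):
        print(entity["entity_group"], entity["word"],
              round(float(entity["score"]), 3))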
Inference and Deployment for NLP
Learn how to deploy an NLP project for live inference on NVIDIA Triton:
  • Prepare the model for deployment.
  • Optimize the model with NVIDIA® TensorRT™.
  • Deploy the model and test it (a client-side sketch follows this list).
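To make the deployment step concrete, here is a minimal sketch of querying a model served by NVIDIA Triton from Python using the tritonclient package. The model name ("bert_ner"), tensor names, and shapes are illustrative assumptions; they must match however the model was actually exported and configured:

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a Triton server on the default HTTP port
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Pre-tokenized dummy input; names and shapes are assumptions that
    # must match the deployed model's configuration (config.pbtxt)
    input_ids = np.zeros((1, 128), dtype=np.int64)
    attention_mask = np.ones((1, 128), dtype=np.int64)

    inputs = [
        httpclient.InferInput("input_ids", list(input_ids.shape), "INT64"),
        httpclient.InferInput("attention_mask", list(attention_mask.shape), "INT64"),
    ]
    inputs[0].set_data_from_numpy(input_ids)
    inputs[1].set_data_from_numpy(attention_mask)

    outputs = [httpclient.InferRequestedOutput("logits")]
    response = client.infer(model_name="bert_ner", inputs=inputs, outputs=outputs)
    print(response.as_numpy("logits").shape)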
Final Review and Next Steps
  • Review key learnings and answer questions.
  • Complete the assessment and earn a certificate.
  • Take the workshop survey.
  • Learn how to set up your own environment and discuss additional resources and training.



Contact us for more details about our trainings and for any other enquiries!

Upcoming Trainings

Join our public courses in our Istanbul, London and Ankara facilities. Private classes will be organized at your preferred location, according to your schedule.

Date              Type                           Location                  Duration
21 October 2024   Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
27 October 2024   Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
01 November 2024  Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
15 November 2024  Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
20 November 2024  Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
22 November 2024  Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day
25 November 2024  Classroom / Virtual Classroom  Istanbul, Ankara, London  1 Day