This article provides an overview of one-time and scheduled data imports (for example, importing users from a CSV text file). It assumes you are already familiar with the basics of Absorb LMS and know how to manage the objects you want to import individually through the admin portal; in other words, you should know how to create and manage a single user before trying to import users in bulk. Please note that these imports are an additional service that typically involves an additional fee and technical resources on both the Absorb and client sides. Interested? Please get in touch with your Client Success Manager.
If you are not familiar with Absorb, we recommend that you read through our Knowledge Base and/or the Absorb Academy. If you are just looking to import users in bulk, we also have a standard User Import tool available within the Admin UI for all clients.
If you are not looking to import data from a file, you may be interested in building an integration using our RESTful API. Please note that using the API requires technical web development experience on your part, but this guide should serve as a jumping-off point for those interested.
- Historical Data Imports
- Scheduled Data Imports
- File Format
- Objects to Import (Download: Absorb Data Mapping)
Historical Data Imports
Absorb has been designed to allow data to be imported from other Learning Management Systems (LMS). This allows LMS clients new to Absorb to continue making use of their historical information for both reporting and user experience purposes as soon as they start using our product. These types of data imports are done only once, usually during the onboarding process for new Absorb clients, to allow organizations a seamless transition from one system to another.
Important: Historical data imports should be completed BEFORE leaderboard points are configured in the LMS, unless you want points issued for all historical enrollments.
Scheduled Data Imports
Data imports can also be run on a scheduled basis, usually to sync data automatically from an external system into Absorb. Files are uploaded to an FTP site, and then processed by Absorb at a pre-determined frequency. For example, we can automatically process a file containing new or updated User records each night at 3:00 AM. The client's HRIS, or other system, would automatically export the file(s), which would drop onto Absorb's FTPS server each night, prior to the 3:00 AM processing.
Tip: We will provide the FTPS server and account details as part of the implementation. If you require the files to be picked up from your own secure FTP server, please discuss and confirm this with the contact working with you at Absorb. You can find more information in our FTP Support for Integrations article.
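As an illustration of the nightly export-and-upload step, here is a sketch using only Python's standard library. The file-naming pattern, column names, and server details are assumptions made for this example, not Absorb requirements; confirm the actual conventions during your scoping call:

```python
import csv
import io
from datetime import datetime, timezone

def nightly_filename(prefix="users", when=None):
    """Build a date-stamped file name, e.g. 'users_2024-05-01.csv'.

    The naming pattern is a placeholder; use whatever convention is
    agreed on with Absorb during implementation.
    """
    when = when or datetime.now(timezone.utc)
    return f"{prefix}_{when:%Y-%m-%d}.csv"

def export_users_csv(users, fieldnames):
    """Serialize a list of user dicts to CSV text ready for upload."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(users)
    return buf.getvalue()

def upload(host, user, password, filename, csv_text):
    """Upload the CSV over FTPS (explicit TLS), as a rough sketch.

    Host, credentials, and target directory are placeholders supplied
    by Absorb (or your own server) during implementation.
    """
    from ftplib import FTP_TLS
    with FTP_TLS(host) as ftps:
        ftps.login(user, password)
        ftps.prot_p()  # switch the data channel to TLS as well
        ftps.storbinary(f"STOR {filename}",
                        io.BytesIO(csv_text.encode("utf-8")))
```

Your HRIS export job would call `export_users_csv` and `upload` on a schedule timed to finish before the agreed processing hour.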
The main difference between Historical and Scheduled Data Imports is whether the import is happening once or on a recurring basis. The capabilities, configuration and process are almost identical.
During the initial scoping call, we will discuss requirements and timelines, as well as walk you through the steps of an import process. The first major deliverable that we typically work towards is obtaining a sample file set, along with your choices for any associated configuration options. Once received, our development team reviews your sample file set, and proceeds to import the data into a sandbox environment. For that reason, it is very important that the sample file set is provided in the exact format that your production data will be delivered in. Any change to the data format after import work begins can result in delays and/or added cost.
Tip: It's usually best to provide real data in your sample files instead of mock data. This allows us to identify any data quality issues early on in the process.
Here are the steps we will walk you through to complete a data import:
- Scoping call and confirmation of requirements
- Creation of a Statement of Work (if needed)
- Client sample files, column mapping, and configuration options received
- Import sample data to sandbox
- Data revision and re-import to sandbox (if needed)
- FTP credentials exchanged and file transfer tested (more info here)
- Client review and testing approval
- Final data files provided (generally just prior to, or after launch)
- Final import to sandbox
- Final approval from client
- Final import to production
- Final client review and confirmation
- Scheduling of recurring import (if applicable)
File Format
Files should be provided in CSV (Comma Separated Values) format. The Instructions tab of the Data Mapping Spreadsheet provides further directions for naming and creating your CSV files. If you're unfamiliar with the CSV format, we recommend reading our article, "What is a CSV file and how do I save my spreadsheet as one?"
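To make the CSV quoting rules concrete, here is a minimal sketch using Python's built-in csv module. The column names are illustrative only; take your actual columns from the Data Mapping Spreadsheet:

```python
import csv
import io

# Column names here are illustrative; use the ones defined in the
# Absorb Data Mapping Spreadsheet for your actual import files.
rows = [
    {"Username": "jsmith", "FirstName": "Jane", "Department": "Sales, West"},
    {"Username": "bchan", "FirstName": "Bob", "Department": "Engineering"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Username", "FirstName", "Department"])
writer.writeheader()
writer.writerows(rows)

# The csv module quotes values that contain commas automatically,
# so "Sales, West" remains a single column rather than splitting in two.
print(buf.getvalue())
```

Exporting through a proper CSV library (or "Save As CSV" in your spreadsheet tool) avoids the most common import failures, which come from hand-built files with unquoted commas or mismatched column counts.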
Depending on what is being imported (e.g., Users, Departments, or Courses), certain configuration options are available. These range from a simple name for the recurring import, to the email address(es) that should receive error notifications, to special logic for processing admin users.
See our Data Imports - Configuration article for more information.
Objects to Import
The Absorb Data Mapping document contains a list of all objects that can be imported (one object per tab), along with their associated fields, dependencies, etc. Use this document to decide which fields you would like to import for each object, and to aid in mapping columns from your file to Absorb field names.
Some of the more commonly imported objects have special configuration that can be applied as part of the import - see the helpdesk articles for those objects below. If an object is listed in the Absorb Data Mapping document but not in the list below, there are no additional configuration options that need to be considered beyond the generic ones noted in the configuration article.
Tip: One element you will see referenced repeatedly is the ExternalId. This field exists as a means to uniquely identify and link related LMS records. The value of this special field can either be 'real', taken from some external system, or purpose-generated solely for use by your Absorb data import.
For example, when importing Users and their Course enrollments into Absorb, the Course Enrollment table's User_ExternalId refers to the User table's ExternalId, thereby relating the two records during the import.
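A quick way to sanity-check this linkage before submitting your files is to confirm that every User_ExternalId in the enrollment file matches an ExternalId in the user file. A minimal sketch follows; the sample data and any columns beyond the two ExternalId fields named above are hypothetical:

```python
import csv
import io

# Sample file contents; real files would be read from disk instead.
users_csv = """ExternalId,Username,FirstName,LastName
emp-001,jsmith,Jane,Smith
emp-002,bchan,Bob,Chan
"""

enrollments_csv = """User_ExternalId,Course_ExternalId,Status
emp-001,crs-100,Complete
emp-003,crs-100,Complete
"""

def missing_user_refs(users_text, enrollments_text):
    """Return enrollment User_ExternalId values with no matching user."""
    known = {row["ExternalId"] for row in csv.DictReader(io.StringIO(users_text))}
    return [row["User_ExternalId"]
            for row in csv.DictReader(io.StringIO(enrollments_text))
            if row["User_ExternalId"] not in known]

print(missing_user_refs(users_csv, enrollments_csv))  # -> ['emp-003']
```

Catching dangling references like this before files reach the FTPS drop saves a round trip through the error-notification process.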