Tags: axapta, dynamics-ax-2009, data-import, data-export

AX 2009 Import/Export via Administration vs table preservation for data refresh


I'm just a SQL guy, so forgive me if this is a dumb question, or in the wrong place.

My instructions for a data refresh from production to the lower environments have me going into the AX client and backing up 8 tables to a file via pointy-clicky mouse action:

  • EmplTable
  • SysUserProfiles
  • SysUserInfo
  • SysCompanyUserInfo
  • DirPartyTable
  • EmployeeTypes
  • UserGroupList
  • UserInfo

Then, I stop the AOS's and do what I do best -- use some SQL to copy data out of 31 tables and into a safe place, do the database restore, copy the data back into the 31 tables from the safe place. Then start the AOS's, sync the dictionaries, and finally import the 8 tables via the AX client (all pointy-clicky like).
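For the 31 tables, the preservation step is plain SQL along these lines (database and table names here are made up for illustration, not the real environment):

```sql
-- Sketch of the preserve/restore step; SafeKeeping, AX2009_PROD, and
-- CustomTable1 are placeholder names.

-- 1. Copy the data out to a safe place before the restore
SELECT * INTO SafeKeeping.dbo.CustomTable1
FROM AX2009_PROD.dbo.CustomTable1;

-- 2. (database restore happens here)

-- 3. Copy the preserved data back over the restored table
TRUNCATE TABLE AX2009_PROD.dbo.CustomTable1;

INSERT INTO AX2009_PROD.dbo.CustomTable1
SELECT * FROM SafeKeeping.dbo.CustomTable1;
```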

I can sync from the command line. I can preserve 31 tables via SQL from the command line. I can start and stop AOS's in the right order from the command line. But I can't do this AX Export/Import Tables step from the command line, and with three environments all that pointy-clicky angers me.

Question, finally: What happens on export/import in the client, if anything, that wouldn't be covered by just dealing with those 8 tables like the other 31? Is it something a dict sync wouldn't fix? Is there a way to tell what else it might do? Is there a simple way in X++ to code the export/import? I already kind of have a grasp on sysStartupCmd and how to get stuff to happen from ax32.exe -StartupCmd, but honestly SQL is so much easier for me, being my native tongue.

Thanks for reading all that!


Solution

  • The process you described seems somewhat unusual for an update process, but it makes sense in certain contexts.

    As stated in a comment, you are basically preserving data for your lower environment. The process you described in the comments updates existing records with the old data while keeping the new data. If you wanted to attempt this via SQL, you would need to write it as an upsert, since new records may be created while existing records are updated. The tables all appear to be related to users in some manner, so this process makes sense if the set of users differs between production and your lower environment - not unusual, especially in a SOX-regulated environment.

    A data dictionary sync wouldn't really do anything in this situation - that basically just updates the table schema in SQL to match the definition stored in AX. Since that schema hasn't changed, nothing would happen.
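    If you did want to mimic the client's import behavior in SQL, it would look like an upsert. Here is a minimal sketch against UserInfo; the column list is abbreviated and the SafeKeeping database name is invented, so treat it as illustrative rather than the full AX schema:

    ```sql
    -- Hedged sketch: upserting preserved user data back after a restore.
    -- SafeKeeping.dbo.USERINFO is the assumed "safe place" copy; the
    -- column list is abbreviated for illustration.
    MERGE AX2009_TEST.dbo.USERINFO AS target
    USING SafeKeeping.dbo.USERINFO AS source
        ON target.ID = source.ID
    WHEN MATCHED THEN
        UPDATE SET target.NAME   = source.NAME,
                   target.ENABLE = source.ENABLE
    WHEN NOT MATCHED THEN
        INSERT (ID, NAME, ENABLE)
        VALUES (source.ID, source.NAME, source.ENABLE);
    ```

    The same pattern would apply to each of the 8 user tables, with the join key and column list adjusted per table.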

    With the other (31) tables you mention, it sounds like you're doing more of a table replacement (so the table is truncated prior to inserting the new data, or the data from the restore process is otherwise discarded/ignored), which is why the process may be different from the 8 user tables.

    In any case, to answer your question directly: I don't know of any out-of-the-box process that could automate this for you. The StartupCmd= flag would be the logical place to look, but the only import command available is for XPO (code) files. To automate the process, you would need to create a new class that extends SysStartupCmd and calls the same functions you would normally invoke manually. This wouldn't be difficult, but it would take a little research to determine the commands and parameters used. Look at the SysStartupCmdAOTImport class to get an idea of how it should be structured. The import process itself appears to be controlled by the SysDataImport class, so examine that to mimic the user input or to see exactly what the process does.
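    A skeleton of such a class might look like the following X++; note this is a structural sketch only - the class name and the way SysDataImport is driven are assumptions, so mirror what SysStartupCmdAOTImport actually does in your AOT for the real pattern:

    ```xpp
    // Hypothetical sketch - SysStartupCmdTableImport is an invented name,
    // and the import call is a placeholder. Model the real implementation
    // on SysStartupCmdAOTImport and SysDataImport in the AOT.
    class SysStartupCmdTableImport extends SysStartupCmd
    {
        void infoRun()
        {
            // parm carries everything after the underscore in
            // ax32.exe -StartupCmd=tableimport_<filename>
            str fileName = parm;

            // Placeholder: drive SysDataImport the same way the
            // Administration > Periodic > Data export/import form does -
            // construct it, point it at fileName, and run it.
            this.runDataImport(fileName);
        }
    }
    ```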

    If you are looking for an alternative (automated) process, consider what we do for our data updates (some environments update every day, some a couple of times a month, others whenever we feel it's needed - all 100% automated): after the backup is restored, we simply run a SQL script that updates or creates the necessary records. The values are essentially hard-coded into the script, but since they don't change, that's not a big deal.

    The downside of this technique compared to yours is that if a user needs access only to a non-production environment, they must first exist (disabled) in production, and the script then updates the necessary values the next time the target environment is refreshed. Additionally, your system developers/administrators would need to communicate any changes that should flow into the script (for example, new table values or a new set of user permissions), which adds some overhead to the process.
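    The hard-coded script approach looks roughly like this; user IDs, names, and values below are invented for illustration:

    ```sql
    -- Hedged example of a post-restore script; all IDs and values are
    -- invented. Runs against the restored non-production database so the
    -- environment-specific accounts survive the refresh.
    UPDATE AX2009_TEST.dbo.USERINFO
    SET    ENABLE = 1                      -- re-enable test-only accounts
    WHERE  ID IN ('TESTUSER1', 'TESTUSER2');

    IF NOT EXISTS (SELECT 1 FROM AX2009_TEST.dbo.USERINFO
                   WHERE ID = 'TESTUSER3')
        INSERT INTO AX2009_TEST.dbo.USERINFO (ID, NAME, ENABLE)
        VALUES ('TESTUSER3', 'Test User Three', 1);
    ```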