diff --git a/DISCLAIMER.md b/DISCLAIMER.md new file mode 100644 index 0000000..78b27bf --- /dev/null +++ b/DISCLAIMER.md @@ -0,0 +1 @@ +These Project development objects are not managed or delivered or intended for future inclusion as a standard component of the SAP Software. Therefore, at Project closure, these Project development objects will not include any further support services, defect resolution, maintenance, or upgrades or in any way be within scope of SAP support obligations for licensed SAP Software. Licensee is solely responsible for supporting such objects. SAP does not assure the compatibility of such objects with future releases of SAP Software or other SAP solutions. diff --git a/LICENSE b/LICENSE index 261eeb9..be36d55 100644 --- a/LICENSE +++ b/LICENSE @@ -1,201 +1,202 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/README.md b/README.md index d08b153..92352a9 100644 --- a/README.md +++ b/README.md @@ -1,37 +1,87 @@ -# SAP Repository Template - -Default templates for SAP open source repositories, including LICENSE, .reuse/dep5, Code of Conduct, etc... All repositories on github.com/SAP will be created based on this template. - -## To-Do - -In case you are the maintainer of a new SAP open source project, these are the steps to do with the template files: - -- Check if the default license (Apache 2.0) also applies to your project. A license change should only be required in exceptional cases. If this is the case, please change the [license file](LICENSE). -- Enter the correct metadata for the REUSE tool. See our [wiki page](https://wiki.wdf.sap.corp/wiki/display/ospodocs/Using+the+Reuse+Tool+of+FSFE+for+Copyright+and+License+Information) for details how to do it. You can find an initial .reuse/dep5 file to build on. 
Please replace the parts inside the single angle quotation marks < > by the specific information for your repository and be sure to run the REUSE tool to validate that the metadata is correct.
-- Adjust the contribution guidelines (e.g. add coding style guidelines, pull request checklists, different license if needed etc.)
-- Add information about your project to this README (name, description, requirements etc). Especially take care for the placeholders - those ones need to be replaced with your project name. See the sections below the horizontal line and [our guidelines on our wiki page](https://wiki.wdf.sap.corp/wiki/display/ospodocs/Guidelines+for+README.md+file) what is required and recommended.
-- Remove all content in this README above and including the horizontal line ;)
-
-***
-
-# Our new open source project
-
-## About this project
-
-*Insert a short description of your project here...*
-
-## Requirements and Setup
-
-*Insert a short description what is required to get your project running...*
-
-## Support, Feedback, Contributing
-
-This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP//issues). Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our [Contribution Guidelines](CONTRIBUTING.md).
-
-## Code of Conduct
-
-We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md) at all times.
-
-## Licensing
-
-Copyright (20xx-)20xx SAP SE or an SAP affiliate company and contributors. Please see our [LICENSE](LICENSE) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/SAP/).
+# SAP Commerce DB Sync
+
+[![REUSE status](https://api.reuse.software/badge/github.com/SAP-samples/commerce-migration-toolkit)](https://api.reuse.software/info/github.com/SAP-samples/commerce-migration-toolkit)
+
+SAP Commerce DB Sync performs table-to-table replication in a single-directional (one-way) manner between two SAP Commerce instances (onPrem to Cloud) or between SAP Commerce and an external database.
+
+SAP Commerce DB Sync is implemented as a set of SAP Commerce extensions and does not require any third-party ETL tool.
+
+There are two main use cases:
+* __Replicate data to an external database__: you can push data regularly in batch mode through a Commerce Cloud cronjob and synchronize it to an external database. A typical use case is analytics and reporting, when you need direct JDBC access to the database to run analytic jobs (see the sketch after this list).
+* __Data migration__: paired with the self-service media process described in [this CXWorks article](https://www.sap.com/cxworks/article/2589632453/migrate_to_sap_commerce_cloud_migrate_media_with_azcopy), it allows you to self-service a one-shot data migration from an on-premise SAP Commerce environment to an SAP Commerce Cloud subscription.
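+
+For illustration only (not part of the extensions): a minimal Groovy sketch of the kind of analytic job that direct JDBC access to the replicated database enables. The URL, credentials, and the orders table layout are assumptions, and a matching JDBC driver must be on the classpath.
+
+```groovy
+// Hypothetical analytic query against the replicated external database.
+import java.sql.DriverManager
+
+def conn = DriverManager.getConnection(
+        'jdbc:sqlserver://reporting.example.com:1433;databaseName=commercereplica',
+        'reporting_user', 'secret')
+try {
+    // Count recently created orders; createdTS is assumed to be replicated as-is.
+    def stmt = conn.prepareStatement('SELECT COUNT(*) FROM orders WHERE createdTS >= ?')
+    stmt.setTimestamp(1, java.sql.Timestamp.valueOf('2022-01-01 00:00:00'))
+    def rs = stmt.executeQuery()
+    if (rs.next()) {
+        println "Orders created since 2022-01-01: ${rs.getLong(1)}"
+    }
+} finally {
+    conn.close()
+}
+```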
+
+# Getting started
+
+* [User Guide for Data Replication](docs/user/USER-GUIDE-DATA-REPLICATION.md) Covers the details of data replication between SAP Commerce Cloud and an external database.
+* [User Guide for Data Migration](docs/user/USER-GUIDE-DATA-MIGRATION.md) When ready to start the migration activities, follow the instructions in the User Guide to trigger the data migration.
+* [Configuration Guide](docs/configuration/CONFIGURATION-GUIDE.md) The extensions ship with a default configuration that may need to be adjusted depending on the desired behaviour. This guide explains how different features and behaviours can be configured.
+* [Security Guide](docs/security/SECURITY-GUIDE.md) A data migration typically involves sensitive data and requires privileged system access. Make sure you have read the Security Guide before you proceed with any migration activities, thereby acknowledging the security recommendations stated in the guide.
+* [Performance Guide](docs/performance/PERFORMANCE-GUIDE.md) Performance is crucial for any data migration, not only for large databases but also generally to reduce the duration of the cut-over window. The Performance Guide explains the basic concepts of performance tuning and provides benchmarks to help you estimate the cut-over time window.
+* [Developer Guide](docs/developer/DEVELOPER-GUIDE.md) If you want to contribute, please read this guide.
+* [Troubleshooting Guide](docs/troubleshooting/TROUBLESHOOTING-GUIDE.md) A collection of common problems and how to tackle them.
+
+# Features Overview
+
+* Database Connectivity
+  * Multiple supported databases: Oracle, MySQL, HANA, MSSQL
+  * UI-based connection validation
+* Schema Differences
+  * UI-based schema differences detector
+  * Automated target schema adaptation
+  * Table creation / removal
+  * Column creation / removal
+  * Configurable behaviour
+* Data Copy
+  * UI-based copy trigger
+  * Configurable target table truncation
+  * Configurable index disabling
+  * Read/write batching with configurable sizes
+  * Copy parallelization
+  * Cluster awareness
+  * Column exclusions
+  * Table exclusions/inclusions
+  * Incremental mode (delta)
+  * Custom tables
+  * Staged approach using table prefix
+* Reporting / Audit
+  * Automated reporting for schema changes
+  * Automated reporting for copy processes
+  * Stored on blob storage
+  * Logging of all actions triggered from the UI
+
+# Compatibility
+
+ * SAP Commerce (>=1811)
+ * Tested with source databases:
+   * Azure SQL
+   * MySQL (5.7)
+   * Oracle (XE 11g)
+   * HANA (express 2.0) and HANA Cloud
+ * Tested with target databases:
+   * Azure SQL
+   * Oracle (XE 11g)
+   * HANA (express 2.0) and HANA Cloud
+
+# Performance
+
+Commerce DB Sync has been built to offer reasonable performance with large amounts of data, using the following design (a sketch follows the list):
+* Table-to-table replication using JDBC (low level)
+* Selective table replication, so a full synchronization is not needed, in particular for large technical tables (task logs, audit logs, ...)
+* Multi-threaded: multiple tables can be processed at the same time
+* Using UPSERT (INSERT/UPDATE)
+* Can use a read replica of the Commerce database as the source database
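+
+As a rough illustration of this design (a simplified sketch, not the extensions' actual implementation), the following Groovy reads a source table in batches and writes it to the target with a batched T-SQL MERGE as the UPSERT. The connection details, table, and columns are assumptions.
+
+```groovy
+// Simplified sketch of the copy strategy: batched JDBC read, batched UPSERT write.
+import java.sql.DriverManager
+
+def source = DriverManager.getConnection(
+        'jdbc:mysql://source.example.com/commerce', 'reader', 'secret')
+def target = DriverManager.getConnection(
+        'jdbc:sqlserver://target.example.com;databaseName=commerce', 'writer', 'secret')
+def batchSize = 1000 // mirrors migration.data.reader.batchsize
+
+def read = source.createStatement()
+read.fetchSize = batchSize // hint the driver to fetch rows in chunks
+def rs = read.executeQuery('SELECT PK, p_code FROM products')
+
+// UPSERT via T-SQL MERGE: update the row if the PK already exists, insert it otherwise.
+def write = target.prepareStatement('''
+    MERGE products AS t
+    USING (VALUES (?, ?)) AS s (PK, p_code) ON t.PK = s.PK
+    WHEN MATCHED THEN UPDATE SET t.p_code = s.p_code
+    WHEN NOT MATCHED THEN INSERT (PK, p_code) VALUES (s.PK, s.p_code);
+''')
+int pending = 0
+while (rs.next()) {
+    write.setLong(1, rs.getLong('PK'))
+    write.setString(2, rs.getString('p_code'))
+    write.addBatch()
+    if (++pending == batchSize) { write.executeBatch(); pending = 0 }
+}
+if (pending > 0) write.executeBatch()
+[source, target].each { it.close() }
+```
+
+The actual extensions additionally parallelize this across tables and reader/writer workers, as configured in `project.properties`.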
+
+# Demo Video
+Here is a video that presents how to use SAP Commerce DB Sync (formerly known as CMT) for data migration from onPrem to Cloud:
+ https://sapvideoa35699dc5.hana.ondemand.com/?entry_id=1_gxduwrl3
+
+# How to Obtain Support
+
+This repository is provided "as-is"; no support is available.
+
+Find more information about SAP Commerce Cloud Setup on our [help site](https://help.sap.com/viewer/product/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/LATEST/en-US).
+
+Regarding Commerce DB Sync, direct database access is not available to customers and will not be made available in the future, and SAP does not provide any additional support for Commerce DB Sync in particular. Support can be obtained only as a paid engagement with SAP Consulting.
+
+# License
+Copyright (c) 2022 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0, except as noted otherwise in the [LICENSE file](LICENSE).
diff --git a/commercedbsync/.classpath b/commercedbsync/.classpath
new file mode 100644
index 0000000..f37cfe7
--- /dev/null
+++ b/commercedbsync/.classpath
@@ -0,0 +1,15 @@
+ + + + + + + + + + + + + + +
diff --git a/commercedbsync/.springBeans b/commercedbsync/.springBeans
new file mode 100644
index 0000000..e476d04
--- /dev/null
+++ b/commercedbsync/.springBeans
@@ -0,0 +1,15 @@
+ + 1 + + + + + + + resources/commercedbsync-spring.xml + web/webroot/WEB-INF/commercedbsync-web-spring.xml + + + +
diff --git a/commercedbsync/buildcallbacks.xml b/commercedbsync/buildcallbacks.xml
new file mode 100644
index 0000000..c723961
--- /dev/null
+++ b/commercedbsync/buildcallbacks.xml
@@ -0,0 +1,161 @@
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + PATCHING azurecloudserver.jar to enable configurable fake tenants in AzureCloudUtils + + + + + + + + + + + + ${ext.azurecloud.path}/bin/azurecloudserver.jar doesn't exist. Cannot patch AzureCloudUtils to + enable fake tenants! + + + + + + + + + + +
diff --git a/commercedbsync/extensioninfo.xml b/commercedbsync/extensioninfo.xml
new file mode 100644
index 0000000..4b1ef33
--- /dev/null
+++ b/commercedbsync/extensioninfo.xml
@@ -0,0 +1,16 @@
+ + + + + + + + + + + +
diff --git a/commercedbsync/external-dependencies.xml b/commercedbsync/external-dependencies.xml
new file mode 100644
index 0000000..4abd5ae
--- /dev/null
+++ b/commercedbsync/external-dependencies.xml
@@ -0,0 +1,52 @@
+ + 4.0.0 + de.hybris.platform + commercedbsync + 6.7.0.0-RC19 + + jar + + + + com.google.code.gson + gson + 2.8.6 + + + com.google.guava + guava + 28.0-jre + + + org.apache.commons + commons-dbcp2 + 2.7.0 + + + com.microsoft.azure + azure-storage + 8.1.0 + + + com.zaxxer + HikariCP + 3.4.5 + + + com.github.freva + ascii-table + 1.1.0 + + + com.fasterxml.jackson.datatype + jackson-datatype-jsr310 + 2.13.3 + + +
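Note (illustration only, not code from the extensions): the HikariCP dependency above provides the kind of connection pool described by the migration.ds.source.* properties in project.properties (next file). A minimal Groovy sketch, with assumed driver, URL, and credentials:

```groovy
// Hypothetical wiring of the source connection pool with HikariCP.
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

def config = new HikariConfig()
config.driverClassName = 'com.mysql.cj.jdbc.Driver'          // migration.ds.source.db.driver (assumed value)
config.jdbcUrl = 'jdbc:mysql://source.example.com/commerce'  // migration.ds.source.db.url (assumed value)
config.username = 'migration_ro'                             // migration.ds.source.db.username
config.password = 'secret'                                   // migration.ds.source.db.password
config.minimumIdle = 5                                       // ...connection.pool.size.idle.min
config.maximumPoolSize = 20                                  // ...connection.pool.size.active.max

def dataSource = new HikariDataSource(config)
def connection = dataSource.connection
try {
    assert connection.isValid(5) // connectivity check, akin to the UI-based validation
} finally {
    connection.close()
    dataSource.close()
}
```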
diff --git a/commercedbsync/project.properties b/commercedbsync/project.properties
new file mode 100644
index 0000000..bc93ebb
--- /dev/null
+++ b/commercedbsync/project.properties
@@ -0,0 +1,112 @@
+#
+# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors.
+# License: Apache-2.0
+#
+#
+commercedbsync.application-context=commercedbsync-spring.xml
+installed.tenants=
+task.engine.loadonstartup=false
+solrfacetsearch.solrClientPool.checkInterval=0
+#backoffice.cockpitng.reset.scope=widgets,cockpitConfig
+#backoffice.cockpitng.reset.triggers=start,login
+################################
+# Migration specific properties
+################################
+migration.ds.source.db.driver=
+migration.ds.source.db.url=
+migration.ds.source.db.username=
+migration.ds.source.db.password=
+migration.ds.source.db.tableprefix=
+migration.ds.source.db.schema=
+migration.ds.source.db.typesystemname=DEFAULT
+migration.ds.source.db.typesystemsuffix=
+migration.ds.source.db.connection.removeabandoned=true
+migration.ds.source.db.connection.pool.size.idle.min=${db.pool.minIdle}
+migration.ds.source.db.connection.pool.size.idle.max=${db.pool.maxIdle}
+migration.ds.source.db.connection.pool.size.active.max=${db.pool.maxActive}
+migration.ds.target.db.driver=${db.driver}
+migration.ds.target.db.url=${db.url}
+migration.ds.target.db.username=${db.username}
+migration.ds.target.db.password=${db.password}
+migration.ds.target.db.tableprefix=${db.tableprefix}
+migration.ds.target.db.catalog=
+migration.ds.target.db.schema=dbo
+migration.ds.target.db.typesystemname=DEFAULT
+migration.ds.target.db.typesystemsuffix=
+migration.ds.target.db.connection.removeabandoned=true
+migration.ds.target.db.connection.pool.size.idle.min=${db.pool.minIdle}
+migration.ds.target.db.connection.pool.size.idle.max=${db.pool.maxIdle}
+migration.ds.target.db.connection.pool.size.active.max=${db.pool.maxActive}
+migration.ds.target.db.max.stage.migrations=5
+#triggered by updatesystem process or manually by hac
+migration.trigger.updatesystem=false
+# Schema migration section - parameters for copying schema from source to target
+migration.schema.enabled=true
+migration.schema.target.tables.add.enabled=true
+migration.schema.target.tables.remove.enabled=false
+migration.schema.target.columns.add.enabled=true
+migration.schema.target.columns.remove.enabled=true
+# automatically trigger schema migrator before data copy process is started
+migration.schema.autotrigger.enabled=false
+# the number of rows read per iteration
+migration.data.reader.batchsize=1000
+# delete rows in target table before inserting new records
+migration.data.truncate.enabled=true
+# These tables will not be emptied before records are inserted
+migration.data.truncate.excluded=
+# maximum number of writer workers per table that can be executed in parallel within a single node in the cluster
+migration.data.workers.writer.maxtasks=10
+# maximum number of reader workers per table that can be executed in parallel within a single node in the cluster
+migration.data.workers.reader.maxtasks=3
+# max retry attempts of a worker in case there is a problem
+migration.data.workers.retryattempts=0
+# maximum number of tables that can be copied in parallel within a single node in the cluster
+migration.data.maxparalleltablecopy=2
+# fail on data insertion errors; if set to false, errors are ignored and the copy continues with the next records
+migration.data.failonerror.enabled=true
+# columns to be excluded. format: migration.data.columns.excluded.<tablename>=<column1,column2,...>
+migration.data.columns.excluded.attributedescriptors=
+migration.data.columns.nullify.attributedescriptors=
+#remove all indices
+migration.data.indices.drop.enabled=false
+#disable indices during migration
+migration.data.indices.disable.enabled=false
+#if empty, disable indices on all tables.
If table specified, only disable for this one. +migration.data.indices.disable.included= +#flag to enable the migration of audit tables +migration.data.tables.audit.enabled=true +#custom tables to migrate (use comma-separated list) +migration.data.tables.custom= +#tables to exclude (use table names name without prefix) +migration.data.tables.excluded=SYSTEMINIT,StoredHttpSessions +#tables to include (use table names name without prefix) +migration.data.tables.included= +migration.cluster.enabled=false +#enable the incremental database migration. +migration.data.incremental.enabled=false +#Only these tables will be taken into account for incremental migration. +migration.data.incremental.tables= +#The timestamp in ISO-8601 ISO_ZONED_DATE_TIME format. Records created or modified after this timestamp will be copied only. +migration.data.incremental.timestamp= +#EXPERIMENTAL: Enable bulk copy for better performance +migration.data.bulkcopy.enabled=false +migration.data.pipe.timeout=7200 +migration.data.pipe.capacity=100 +# No activity? -> migration aborted and marked as stalled +migration.stalled.timeout=7200 +migration.data.timeout=60 +migration.data.report.connectionstring=${media.globalSettings.cloudAzureBlobStorageStrategy.connection} +# Properties that will be masked in the report +migration.properties.masked=migration.data.report.connectionstring,migration.ds.source.db.password,migration.ds.target.db.password +migration.locale.default=en-US +# Enhanced Logging +log4j2.appender.migrationAppender.type=Console +log4j2.appender.migrationAppender.name=MigrationAppender +log4j2.appender.migrationAppender.layout.type=PatternLayout +log4j2.appender.migrationAppender.layout.pattern=%-5p [%t] [%c{1}] %X{migrationID,pipeline,clusterID} %m%n +log4j2.logger.migrationToolkit.name=com.sap.cx.boosters.commercedbsync +log4j2.logger.migrationToolkit.level=INFO +log4j2.logger.migrationToolkit.appenderRef.migration.ref=MigrationAppender +log4j2.logger.migrationToolkit.additivity=false + + diff --git a/commercedbsync/resources/commercedbsync-beans.xml b/commercedbsync/resources/commercedbsync-beans.xml new file mode 100644 index 0000000..b553e97 --- /dev/null +++ b/commercedbsync/resources/commercedbsync-beans.xml @@ -0,0 +1,148 @@ + + + + + + + RUNNING + PROCESSED + COMPLETED + ABORTED + STALLED + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + No prefix, no type system suffix + + + No prefix, with type system suffix + + + With prefix, with type system suffix + + + With prefix, with type system suffix, no additional suffix + + + I.e, LP tables + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsync/resources/commercedbsync-items.xml b/commercedbsync/resources/commercedbsync-items.xml new file mode 100644 index 0000000..01def0a --- /dev/null +++ b/commercedbsync/resources/commercedbsync-items.xml @@ -0,0 +1,111 @@ + + + + + + + + + + + + + + + + + + + + + + List of table included for the migration + + + + java.lang.Boolean.FALSE + + + + + + + automatically trigger schema migrator before data copy process is started + + + false + + + delete rows in target table before inserting new records + + + false + + + + + + Cronjob For Incremental Migration. + + + + Last Executed Incremental migration Timestamp + + + + + + + + + Cronjob For full Migration. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsync/resources/commercedbsync-spring.xml b/commercedbsync/resources/commercedbsync-spring.xml new file mode 100644 index 0000000..eac433e --- /dev/null +++ b/commercedbsync/resources/commercedbsync-spring.xml @@ -0,0 +1,312 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsync/resources/commercedbsync/dummy.txt b/commercedbsync/resources/commercedbsync/dummy.txt new file mode 100644 index 0000000..e69de29 diff --git a/commercedbsync/resources/commercedbsync/sap-hybris-platform.png b/commercedbsync/resources/commercedbsync/sap-hybris-platform.png new file mode 100644 index 0000000..3984ada Binary files /dev/null and b/commercedbsync/resources/commercedbsync/sap-hybris-platform.png differ diff --git a/commercedbsync/resources/groovy/MigrationSummaryScript.groovy b/commercedbsync/resources/groovy/MigrationSummaryScript.groovy new file mode 100644 index 0000000..ab05740 --- /dev/null +++ b/commercedbsync/resources/groovy/MigrationSummaryScript.groovy @@ -0,0 +1,59 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package groovy + +import de.hybris.platform.util.Config +import org.apache.commons.lang.StringUtils + +import java.util.stream.Collectors + +def result = generateMigrationSummary(migrationContext) +println result +return result + +def generateMigrationSummary(context) { + StringBuilder sb = new StringBuilder(); + try { + final String sourcePrefix = context.getDataSourceRepository().getDataSourceConfiguration().getTablePrefix(); + final String targetPrefix = context.getDataTargetRepository().getDataSourceConfiguration().getTablePrefix(); + final String dbPrefix = Config.getString("db.tableprefix", ""); + final Set sourceSet = migrationContext.getDataSourceRepository().getAllTableNames() + .stream() + .map({ tableName -> tableName.replace(sourcePrefix, "") }) + .collect(Collectors.toSet()); + + final Set targetSet = migrationContext.getDataTargetRepository().getAllTableNames() + sb.append("------------").append("\n") + sb.append("All tables: ").append(sourceSet.size() + targetSet.size()).append("\n") + sb.append("Source tables: ").append(sourceSet.size()).append("\n") + sb.append("Target tables: ").append(targetSet.size()).append("\n") + sb.append("------------").append("\n") + sb.append("Source prefix: ").append(sourcePrefix).append("\n") + sb.append("Target prefix: ").append(targetPrefix).append("\n") + sb.append("DB prefix: ").append(dbPrefix).append("\n") + sb.append("------------").append("\n") + sb.append("Migration Type: ").append("\n") + sb.append(StringUtils.isNotEmpty(dbPrefix) && + StringUtils.isNotEmpty(targetPrefix) && !StringUtils.equalsIgnoreCase(dbPrefix, targetPrefix) ? 
"STAGED" : "DIRECT").append("\n") + sb.append("------------").append("\n") + sb.append("Found prefixes:").append("\n") + + Map prefixes = new HashMap<>() + targetSet.forEach({ tableName -> + String srcTable = schemaDifferenceService.findCorrespondingSrcTable(sourceSet, tableName); + String prefix = tableName.replace(srcTable, ""); + prefixes.put(prefix, targetSet.stream().filter({ e -> e.startsWith(prefix) }).count()); + }); + prefixes.forEach({ k, v -> sb.append("Prefix: ").append(k).append(" number of tables: ").append(v).append("\n") }); + sb.append("------------").append("\n"); + + } catch (Exception e) { + e.printStackTrace(); + } + return sb.toString(); +} + diff --git a/commercedbsync/resources/groovy/ddlaltercreate.groovy b/commercedbsync/resources/groovy/ddlaltercreate.groovy new file mode 100644 index 0000000..4df0f40 --- /dev/null +++ b/commercedbsync/resources/groovy/ddlaltercreate.groovy @@ -0,0 +1,73 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +// Parameters +queryToRun = 'select p_streetname from addresses where p_streetname like \'%Test%\'' +indexToCreateaddresses = 'CREATE INDEX addresses_ownerpkstring_ddl ON ADDRESSES(typepkstring, ownerpkstring) ONLINE;' +indexToDropAddresses = 'drop index addresses_ownerpkstring_ddl ONLINE;' +AlterAddColumnAddressesQuery = 'ALTER TABLE ADDRESSES ADD (TEST%s BIGINT) ONLINE;' +AlterRemoveColumnAddresses = 'ALTER TABLE ADDRESSES DROP (TEST%s) ONLINE;' + +queryLoopCount = 600 +threadSize = 1 +threadPoolSize = 1 // Long Running Query Test + +import java.util.concurrent.Callable +import java.util.concurrent.TimeUnit +import java.util.concurrent.Executors +import groovy.time.TimeCategory +import de.hybris.platform.core.Registry +import de.hybris.platform.jalo.JaloSession +import de.hybris.platform.core.Tenant + +queryTasks = [] +Tenant currentTenant = Registry.getCurrentTenant(); +// Create Callable Threads +1.upto(threadSize) { index -> + // Create a database connection within each Thread + queryTasks << { + def totalTime1 = 0 + def totalTime2 = 0 + def indexCreation = true; + // Each thread runs a thread/loop specific template query X times + 1.upto(queryLoopCount) { loopIndex -> + + try { + Registry.setCurrentTenant(currentTenant); + JaloSession.getCurrentSession().activate(); + + start = new Date() + totalTime2 = 0 + String AlterAddColumnAddresses = String.format(AlterRemoveColumnAddresses ,loopIndex); + println(AlterAddColumnAddresses) + jdbcTemplate.execute(AlterAddColumnAddresses) + stop = new Date() + totalTime2 += TimeCategory.minus(stop, start).toMilliseconds() + println "Table Column Creation AlterAddColumnAddresses loop ${loopIndex} totalTime(ms) ${totalTime2}" + // Thread.sleep(5000); + + } finally { + JaloSession.getCurrentSession().close(); + Registry.unsetCurrentTenant(); + } + } + // Return average as the result of the Callable + totalTime1 + totalTime2 / queryLoopCount + } as Callable +} + +executorService = Executors.newFixedThreadPool(threadPoolSize) +println "Test started at ${new Date()}" +results = executorService.invokeAll(queryTasks) +totalAverage = 0 +results.eachWithIndex { it, index -> + totalAverage += it.get() + println "$index --> ${it.get()}" +} +println "Total Average --> ${totalAverage / threadSize}" +println "Test finished at ${new Date()}" +executorService.shutdown() +executorService.awaitTermination(200, TimeUnit.SECONDS) \ No newline at end of file diff --git 
a/commercedbsync/resources/impex/essentialdata-commercemigration-jobs.impex b/commercedbsync/resources/impex/essentialdata-commercemigration-jobs.impex new file mode 100644 index 0000000..3a4e356 --- /dev/null +++ b/commercedbsync/resources/impex/essentialdata-commercemigration-jobs.impex @@ -0,0 +1,27 @@ + +INSERT_UPDATE ServicelayerJob;code[unique=true];springId[unique=true] +;incrementalMigrationJob;incrementalMigrationJob +;fullMigrationJob;fullMigrationJob + +# Update details for incremental migration +INSERT_UPDATE IncrementalMigrationCronJob;code[unique=true];active;job(code)[default=incrementalMigrationJob];sessionLanguage(isoCode)[default=en] +;incrementalMigrationJob;true; + +INSERT_UPDATE IncrementalMigrationCronJob;code[unique=true];migrationItems +#% afterEach: impex.getLastImportedItem().setActivationTime(new Date(System.currentTimeMillis() - 3600 * 1000)); +;incrementalMigrationJob;PAYMENTMODES,ADDRESSES,users,CAT2PRODREL,CONSIGNMENTS,ORDERS + +INSERT_UPDATE Trigger;cronjob(code)[unique=true];cronExpression +#% afterEach: impex.getLastImportedItem().setLastStartTime(new Date(System.currentTimeMillis() - 3600 * 1000)); +;incrementalMigrationJob; 0 0/1 * * * ? + +INSERT_UPDATE FullMigrationCronJob;code[unique=true];active;job(code)[default=fullMigrationJob];sessionLanguage(isoCode)[default=en] +;fullMigrationJob;true; + +INSERT_UPDATE FullMigrationCronJob;code[unique=true];truncateEnabled;migrationItems +;fullMigrationJob;true;PAYMENTMODES,products + +INSERT_UPDATE Trigger;cronjob(code)[unique=true];cronExpression +#% afterEach: impex.getLastImportedItem().setActivationTime(new Date(System.currentTimeMillis() - 3600 * 1000)); +;fullMigrationJob; 0 0 0 * * ? + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_de.properties b/commercedbsync/resources/localization/commercedbsync-locales_de.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_de.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_en.properties b/commercedbsync/resources/localization/commercedbsync-locales_en.properties new file mode 100644 index 0000000..3c1263a --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_en.properties @@ -0,0 +1,18 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + +type.MigrationCronJob.truncateEnabled.name=Truncate Enabled +type.MigrationCronJob.truncateEnabled.description= + +type.MigrationCronJob.schemaAutotrigger.name=Schema Auto Trigger Enabled +type.MigrationCronJob.schemaAutotrigger.description= + +type.MigrationCronJob.lastStartTime.name=Last Start time For Incremental Job +type.MigrationCronJob.lastStartTime.description= + +type.MigrationCronJob.migrationItems.name=Migration Tables +type.MigrationCronJob.migrationItems.description= + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_es.properties b/commercedbsync/resources/localization/commercedbsync-locales_es.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_es.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_fr.properties b/commercedbsync/resources/localization/commercedbsync-locales_fr.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_fr.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_it.properties b/commercedbsync/resources/localization/commercedbsync-locales_it.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_it.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_ja.properties b/commercedbsync/resources/localization/commercedbsync-locales_ja.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_ja.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_ko.properties b/commercedbsync/resources/localization/commercedbsync-locales_ko.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_ko.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_pt.properties b/commercedbsync/resources/localization/commercedbsync-locales_pt.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_pt.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_ru.properties b/commercedbsync/resources/localization/commercedbsync-locales_ru.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_ru.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/localization/commercedbsync-locales_zh.properties b/commercedbsync/resources/localization/commercedbsync-locales_zh.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsync/resources/localization/commercedbsync-locales_zh.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+# License: Apache-2.0 +# +# + diff --git a/commercedbsync/resources/sql/createSchedulerTables.sql b/commercedbsync/resources/sql/createSchedulerTables.sql new file mode 100644 index 0000000..bad4a15 --- /dev/null +++ b/commercedbsync/resources/sql/createSchedulerTables.sql @@ -0,0 +1,105 @@ + +DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYTASKS; + +CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS ( + targetnodeId int NOT NULL, + migrationId NVARCHAR(255) NOT NULL, + pipelinename NVARCHAR(255) NOT NULL, + sourcetablename NVARCHAR(255) NOT NULL, + targettablename NVARCHAR(255) NOT NULL, + columnmap NVARCHAR(MAX) NULL, + duration NVARCHAR (255) NULL, + sourcerowcount int NOT NULL DEFAULT 0, + targetrowcount int NOT NULL DEFAULT 0, + failure char(1) NOT NULL DEFAULT '0', + error NVARCHAR(MAX) NULL, + published char(1) NOT NULL DEFAULT '0', + lastupdate DATETIME2 NOT NULL DEFAULT '0001-01-01 00:00:00', + avgwriterrowthroughput numeric(10,2) NULL DEFAULT 0, + avgreaderrowthroughput numeric(10,2) NULL DEFAULT 0, + durationinseconds numeric(10,2) NULL DEFAULT 0, + PRIMARY KEY (migrationid, targetnodeid, pipelinename) +); + +DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYSTATUS; + +CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS ( + migrationId NVARCHAR(255) NOT NULL, + startAt datetime2 NOT NULL DEFAULT GETUTCDATE(), + endAt datetime2, + lastUpdate datetime2, + total int NOT NULL DEFAULT 0, + completed int NOT NULL DEFAULT 0, + failed int NOT NULL DEFAULT 0, + status NVARCHAR(255) NOT NULL DEFAULT 'RUNNING' + PRIMARY KEY (migrationid) +); + +IF OBJECT_ID ('MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update','TR') IS NOT NULL + DROP TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update; + +CREATE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update +ON MIGRATIONTOOLKIT_TABLECOPYTASKS +AFTER INSERT, UPDATE +AS +BEGIN + DECLARE @relevant_count integer = 0 + SET NOCOUNT ON + /* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + + -- latest update overall = latest update timestamp of updated tasks + UPDATE s + SET s.lastUpdate = t.latestUpdate + FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s + INNER JOIN ( + SELECT migrationId, MAX(lastUpdate) AS latestUpdate + FROM inserted + GROUP BY migrationId + ) AS t + ON s.migrationId = t.migrationId + + SELECT @relevant_count = COUNT(pipelinename) + FROM inserted + WHERE failure = '1' + OR duration IS NOT NULL + + IF @relevant_count > 0 + BEGIN + -- updated completed count when tasks completed + UPDATE s + SET s.completed = t.completed + FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s + INNER JOIN ( + SELECT migrationId, COUNT(pipelinename) AS completed + FROM MIGRATIONTOOLKIT_TABLECOPYTASKS + WHERE duration IS NOT NULL + GROUP BY migrationId + ) AS t + ON s.migrationId = t.migrationId + -- update failed count when tasks failed + UPDATE s + SET s.failed = t.failed + FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS s + INNER JOIN ( + SELECT migrationId, COUNT(pipelinename) AS failed + FROM MIGRATIONTOOLKIT_TABLECOPYTASKS + WHERE failure = '1' + GROUP BY migrationId + ) AS t + ON s.migrationId = t.migrationId + + UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS + SET endAt = GETUTCDATE() + WHERE total = completed + AND endAt IS NULL + + UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS + SET status = 'PROCESSED' + WHERE status = 'RUNNING' + AND total = completed + END +END; diff --git a/commercedbsync/resources/sql/createSchedulerTablesHana.sql b/commercedbsync/resources/sql/createSchedulerTablesHana.sql new file mode 100644 index 0000000..bf15d86 --- /dev/null +++ b/commercedbsync/resources/sql/createSchedulerTablesHana.sql @@ -0,0 +1,340 @@ + + +CREATE OR REPLACE PROCEDURE MIGRATION_PROCEDURE (IN tablename VARCHAR(1000)) + LANGUAGE SQLSCRIPT AS +BEGIN + DECLARE found INT=0; +SELECT count(*) INTO found FROM OBJECTS WHERE OBJECT_TYPE='TABLE' AND OBJECT_NAME=:tablename; +IF tablename = 'MIGRATIONTOOLKIT_TABLECOPYTASKS' AND :found > 0 + THEN +DROP TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS; +END IF; + +IF tablename = 'MIGRATIONTOOLKIT_TABLECOPYSTATUS' AND :found > 0 + THEN +DROP TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS; +END IF; +END; +# +CALL MIGRATION_PROCEDURE('MIGRATIONTOOLKIT_TABLECOPYTASKS'); +# + +CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS ( + targetnodeId int NOT NULL, + migrationId NVARCHAR(255) NOT NULL, + pipelinename NVARCHAR(255) NOT NULL, + sourcetablename NVARCHAR(255) NOT NULL, + targettablename NVARCHAR(255) NOT NULL, + columnmap NVARCHAR(5000) NULL, + duration NVARCHAR (255) NULL, + sourcerowcount int NOT NULL DEFAULT 0, + targetrowcount int NOT NULL DEFAULT 0, + failure char(1) NOT NULL DEFAULT '0', + error NVARCHAR(5000) NULL, + published char(1) NOT NULL DEFAULT '0', + lastupdate Timestamp NOT NULL DEFAULT '0001-01-01 00:00:00', + avgwriterrowthroughput numeric(10,2) NULL DEFAULT 0, + avgreaderrowthroughput numeric(10,2) NULL DEFAULT 0, + durationinseconds numeric(10,2) NULL DEFAULT 0, + PRIMARY KEY (migrationid, targetnodeid, pipelinename) +); + +# + +CALL MIGRATION_PROCEDURE('MIGRATIONTOOLKIT_TABLECOPYSTATUS'); +# + +CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS ( + migrationId NVARCHAR(255) NOT NULL, + startAt Timestamp NOT NULL DEFAULT CURRENT_UTCDATE, + endAt Timestamp, + lastUpdate Timestamp, + total int NOT NULL DEFAULT 0, + completed int NOT NULL DEFAULT 0, + failed int NOT NULL DEFAULT 0, + status NVARCHAR(255) NOT NULL DEFAULT 'RUNNING', + PRIMARY KEY (migrationid) +); + +# + + +CREATE OR REPLACE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update_trigger +AFTER 
+    ON MIGRATIONTOOLKIT_TABLECOPYTASKS
+    REFERENCING OLD ROW AS old, NEW ROW AS new
+    FOR EACH ROW
+BEGIN
+    /* ORIGSQL: PRAGMA AUTONOMOUS_TRANSACTION; */
+    -- BEGIN AUTONOMOUS TRANSACTION
+    DECLARE var_pipeline_count DECIMAL(38,10); /* ORIGSQL: var_pipeline_count NUMBER; */
+
+    /* ORIGSQL: CURSOR cur_count_pipeline IS select COUNT(pipelinename) countpipelines from MIGR(...) */
+    DECLARE CURSOR cur_count_pipeline
+    FOR
+SELECT /* ORIGSQL: SELECT COUNT(pipelinename) countpipelines from MIGRATIONTOOLKIT_TABLECOPYTASKS w(...) */
+    COUNT(pipelinename) AS countpipelines
+FROM
+    MIGRATIONTOOLKIT_TABLECOPYTASKS
+WHERE
+    failure = '1'
+    OR duration IS NOT NULL;
+
+/* RESOLVE: Trigger declaration: Additional conversion may be required */
+
+/* ORIGSQL: OPEN cur_count_pipeline; */
+OPEN cur_count_pipeline;
+
+/* ORIGSQL: FETCH cur_count_pipeline INTO var_pipeline_count; */
+FETCH cur_count_pipeline INTO var_pipeline_count;
+
+IF (:var_pipeline_count > 0)
+    THEN
+    -- completed count
+    /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET COMPLETED = NVL((SELECT count(*) (...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: COMPLETED = */
+    COMPLETED = IFNULL( /* ORIGSQL: NVL((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationi(...) */
+    (
+        SELECT /* ORIGSQL: (SELECT COUNT(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationid = (...) */
+            COUNT(*)
+        FROM
+            MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+            ST.migrationid = TK.migrationid
+            AND duration IS NOT NULL
+        GROUP BY
+            migrationid
+    )
+    ,0);
+
+-- failed count
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = NVL((SELECT count(*) FRO(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: failed = */
+    failed = IFNULL( /* ORIGSQL: NVL((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationi(...) */
+    (
+        SELECT /* ORIGSQL: (SELECT COUNT(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationid = (...) */
+            COUNT(*)
+        FROM
+            MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+            ST.migrationid = TK.migrationid
+            AND failure = '1'
+        GROUP BY
+            migrationid
+    )
+    ,0);
+END IF;
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF /* ORIGSQL: IF UPDATING AND */
+:new.failure = '1'
+    AND :old.failure = '0'
+    THEN
+        /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = failed + 1 WHERE migrati(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: failed = */
+    failed = failed + 1
+WHERE
+    migrationid = :new.migrationid;
+
+--INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('Updating failed', 1);
+END IF;
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF /* ORIGSQL: IF UPDATING AND */
+:new.duration IS NOT NULL
+    AND :old.duration IS NULL
+    THEN
+        /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET completed = completed + 1 WHERE m(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: completed = */
+    completed = completed + 1
+WHERE
+    migrationid = :new.migrationid
+    AND total > completed;
+
+--INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('Updating completed', 1);
+END IF;
+
+    -- this SQL is slightly different from the SQL Server one
+    /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET lastupdate = sys_extract_utc(systime(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: lastupdate = */
+    lastupdate = CURRENT_UTCTIMESTAMP /* ORIGSQL: sys_extract_utc(systimestamp) */
+WHERE
+    migrationid = :new.migrationid;
+
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET endAt = sys_extract_utc(systimestamp(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: endAt = */
+    endAt = CURRENT_UTCTIMESTAMP /* ORIGSQL: sys_extract_utc(systimestamp) */
+WHERE
+    total = completed
+    AND endAt IS NULL;
+
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET status = 'PROCESSED' WHERE status = (...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: status = */
+    status = 'PROCESSED'
+WHERE
+    status = 'RUNNING'
+    AND total = completed;
+
+/* ORIGSQL: COMMIT; */
+/* RESOLVE: Statement 'COMMIT' not currently supported in HANA SQL trigger objects */
+-- COMMIT; /* NOT CONVERTED! */
+-- END;
+END;
+
+#
+
+CREATE OR REPLACE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Insert_trigger
+AFTER INSERT
+ON MIGRATIONTOOLKIT_TABLECOPYTASKS
+REFERENCING OLD ROW AS old, NEW ROW AS new
+FOR EACH ROW
+BEGIN
+    /* ORIGSQL: PRAGMA AUTONOMOUS_TRANSACTION; */
+    -- BEGIN AUTONOMOUS TRANSACTION
+    DECLARE var_pipeline_count DECIMAL(38,10); /* ORIGSQL: var_pipeline_count NUMBER; */
+
+    /* ORIGSQL: CURSOR cur_count_pipeline IS select COUNT(pipelinename) countpipelines from MIGR(...) */
+    DECLARE CURSOR cur_count_pipeline
+    FOR
+SELECT /* ORIGSQL: SELECT COUNT(pipelinename) countpipelines from MIGRATIONTOOLKIT_TABLECOPYTASKS w(...) */
+    COUNT(pipelinename) AS countpipelines
+FROM
+    MIGRATIONTOOLKIT_TABLECOPYTASKS
+WHERE
+    failure = '1'
+    OR duration IS NOT NULL;
+
+/* RESOLVE: Trigger declaration: Additional conversion may be required */
+
+/* ORIGSQL: OPEN cur_count_pipeline; */
+OPEN cur_count_pipeline;
+
+/* ORIGSQL: FETCH cur_count_pipeline INTO var_pipeline_count; */
+FETCH cur_count_pipeline INTO var_pipeline_count;
+
+IF (:var_pipeline_count > 0)
+    THEN
+    -- completed count
+    /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET COMPLETED = NVL((SELECT count(*) (...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: COMPLETED = */
+    COMPLETED = IFNULL( /* ORIGSQL: NVL((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationi(...) */
+    (
+        SELECT /* ORIGSQL: (SELECT COUNT(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationid = (...) */
+            COUNT(*)
+        FROM
+            MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+            ST.migrationid = TK.migrationid
+            AND duration IS NOT NULL
+        GROUP BY
+            migrationid
+    )
+    ,0);
+
+-- failed count
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = NVL((SELECT count(*) FRO(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: failed = */
+    failed = IFNULL( /* ORIGSQL: NVL((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationi(...) */
+    (
+        SELECT /* ORIGSQL: (SELECT COUNT(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK WHERE ST.migrationid = (...) */
+            COUNT(*)
+        FROM
+            MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+            ST.migrationid = TK.migrationid
+            AND failure = '1'
+        GROUP BY
+            migrationid
+    )
+    ,0);
+END IF;
+
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF /* ORIGSQL: IF INSERTING AND */
+:new.failure = '1'
+    THEN
+        /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = failed + 1 WHERE migrati(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: failed = */
+    failed = failed + 1
+WHERE
+    migrationid = :new.migrationid;
+
+--INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('INSERTING failed', 1);
+END IF;
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF /* ORIGSQL: IF INSERTING AND */
+:new.duration IS NOT NULL
+    THEN
+        /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET completed = completed + 1 WHERE m(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST
+SET
+    /* ORIGSQL: completed = */
+    completed = completed + 1
+WHERE
+    migrationid = :new.migrationid
+    AND total > completed;
+
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+--INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('INSERTING completed', 1);
+END IF;
+
+    -- this SQL is slightly different from the SQL Server one
+    /* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET lastupdate = sys_extract_utc(systime(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: lastupdate = */
+    lastupdate = CURRENT_UTCTIMESTAMP /* ORIGSQL: sys_extract_utc(systimestamp) */
+WHERE
+    migrationid = :new.migrationid;
+
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET endAt = sys_extract_utc(systimestamp(...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: endAt = */
+    endAt = CURRENT_UTCTIMESTAMP /* ORIGSQL: sys_extract_utc(systimestamp) */
+WHERE
+    total = completed
+    AND endAt IS NULL;
+
+/* ORIGSQL: UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS SET status = 'PROCESSED' WHERE status = (...) */
+UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+SET
+    /* ORIGSQL: status = */
+    status = 'PROCESSED'
+WHERE
+    status = 'RUNNING'
+    AND total = completed;
+
+/* ORIGSQL: COMMIT; */
+/* RESOLVE: Statement 'COMMIT' not currently supported in HANA SQL trigger objects */
+-- COMMIT; /* NOT CONVERTED! */
+-- END;
+END;
+
+#
\ No newline at end of file
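The HANA port splits the single SQL Server trigger into separate INSERT and UPDATE triggers and, as the RESOLVE comment notes, cannot COMMIT inside the trigger body, so the status row is maintained within the writing transaction. A hypothetical smoke test for the insert trigger, assuming a JDBC connection (autocommit disabled) to a schema where this script has already run; test code only, so bind parameters are omitted:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public final class SchedulerTriggerSmokeTest {
    /** Inserts one finished task and checks that the status row was aggregated. */
    public static void run(Connection con, String migrationId) throws Exception {
        try (Statement stmt = con.createStatement()) {
            stmt.executeUpdate("INSERT INTO MIGRATIONTOOLKIT_TABLECOPYSTATUS (migrationId, total) VALUES ('" + migrationId + "', 1)");
            stmt.executeUpdate("INSERT INTO MIGRATIONTOOLKIT_TABLECOPYTASKS " +
                    "(targetnodeId, migrationId, pipelinename, sourcetablename, targettablename, duration, lastupdate) " +
                    "VALUES (0, '" + migrationId + "', 'p1', 'src', 'tgt', '1s', CURRENT_UTCTIMESTAMP)");
            con.commit();
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT status, completed FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS WHERE migrationId = '" + migrationId + "'")) {
                if (rs.next()) {
                    // expected outcome: status=PROCESSED, completed=1
                    System.out.println("status=" + rs.getString(1) + ", completed=" + rs.getInt(2));
                }
            }
        }
    }
}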
diff --git a/commercedbsync/resources/sql/createSchedulerTablesOracle.sql b/commercedbsync/resources/sql/createSchedulerTablesOracle.sql
new file mode 100644
index 0000000..7286422
--- /dev/null
+++ b/commercedbsync/resources/sql/createSchedulerTablesOracle.sql
@@ -0,0 +1,158 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+
+BEGIN
+    EXECUTE IMMEDIATE 'DROP TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS';
+EXCEPTION
+    WHEN OTHERS THEN NULL;
+END;
+/
+
+
+
+CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS (
+    targetnodeId number(10) NOT NULL,
+    migrationId NVARCHAR2(255) NOT NULL,
+    pipelinename NVARCHAR2(255) NOT NULL,
+    sourcetablename NVARCHAR2(255) NOT NULL,
+    targettablename NVARCHAR2(255) NOT NULL,
+    columnmap CLOB NULL,
+    duration NVARCHAR2(255) NULL,
+    sourcerowcount number(10) DEFAULT 0 NOT NULL,
+    targetrowcount number(10) DEFAULT 0 NOT NULL,
+    failure char(1) DEFAULT '0' NOT NULL,
+    error CLOB NULL,
+    published char(1) DEFAULT '0' NOT NULL,
+    lastupdate Timestamp NOT NULL,
+    avgwriterrowthroughput number(10,2) DEFAULT 0 NULL,
+    avgreaderrowthroughput number(10,2) DEFAULT 0 NULL,
+    durationinseconds number(10,2) DEFAULT 0 NULL,
+    PRIMARY KEY (migrationid, targetnodeid, pipelinename)
+)
+/
+
+
+
+
+BEGIN
+    EXECUTE IMMEDIATE 'DROP TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS';
+EXCEPTION
+    WHEN OTHERS THEN NULL;
+END;
+/
+
+
+CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS (
+    migrationId NVARCHAR2(255) NOT NULL,
+    startAt Timestamp DEFAULT SYS_EXTRACT_UTC(SYSTIMESTAMP) NOT NULL,
+    endAt Timestamp,
+    lastUpdate Timestamp,
+    total number(10) DEFAULT 0 NOT NULL,
+    completed number(10) DEFAULT 0 NOT NULL,
+    failed number(10) DEFAULT 0 NOT NULL,
+    status NVARCHAR2(255) DEFAULT 'RUNNING' NOT NULL,
+    PRIMARY KEY (migrationid)
+)
+/
+
+
+
+
+CREATE OR REPLACE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update
+    AFTER INSERT OR UPDATE
+    ON MIGRATIONTOOLKIT_TABLECOPYTASKS
+    FOR EACH ROW
+DECLARE
+    PRAGMA AUTONOMOUS_TRANSACTION;
+
+    var_pipeline_count NUMBER;
+
+    CURSOR cur_count_pipeline
+    IS select count(pipelinename) countpipelines from MIGRATIONTOOLKIT_TABLECOPYTASKS where failure='1' OR duration is not NULL;
+
+BEGIN
+
+
+    OPEN cur_count_pipeline;
+    FETCH cur_count_pipeline INTO var_pipeline_count;
+    IF (var_pipeline_count > 0) THEN
+        -- completed count
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET COMPLETED =
+        NVL
+        ((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+        ST.migrationid = TK.migrationid
+        AND duration IS NOT NULL
+        GROUP BY migrationid
+        ),0);
+
+        -- failed count
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed =
+        NVL
+        ((SELECT count(*) FROM MIGRATIONTOOLKIT_TABLECOPYTASKS TK
+        WHERE
+        ST.migrationid = TK.migrationid
+        AND failure='1'
+        GROUP BY migrationid
+        ),0);
+
+    END IF;
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF UPDATING AND :NEW.failure='1' AND :OLD.failure='0' THEN
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = failed + 1 WHERE migrationid = :NEW.migrationid;
+        --INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('Updating failed', 1);
+    END IF;
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF UPDATING AND :NEW.duration IS NOT NULL AND :OLD.duration IS NULL THEN
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET completed = completed + 1 WHERE migrationid = :NEW.migrationid;
+        --INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('Updating completed', 1);
+    END IF;
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF INSERTING AND :NEW.failure='1' THEN
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET failed = failed + 1 WHERE migrationid = :NEW.migrationid;
+        --INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('INSERTING failed', 1);
+    END IF;
+
+    -- this takes care of THIS ROW, for which trigger is fired
+    IF INSERTING AND :NEW.duration IS NOT NULL THEN
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS ST SET completed = completed + 1 WHERE migrationid = :NEW.migrationid;
+        --INSERT INTO EVENT_LOG_CMT (DESCRIPTION, COUNTS) VALUES ('INSERTING completed', 1);
+    END IF;
+
+    -- this SQL is slightly different from the SQL Server one
+    UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+    SET lastupdate = sys_extract_utc(systimestamp)
+    WHERE migrationid = :NEW.migrationid;
+
+    UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+    SET endAt = sys_extract_utc(systimestamp)
+    WHERE total = completed
+    AND endAt IS NULL;
+
+    UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS
+    SET status = 'PROCESSED'
+    WHERE status = 'RUNNING'
+    AND total = completed;
+    COMMIT;
+END;
+
+/
+
diff --git a/commercedbsync/resources/sql/createSchedulerTablesPostGres.sql b/commercedbsync/resources/sql/createSchedulerTablesPostGres.sql
new file mode 100644
index 0000000..451451b
--- /dev/null
+++ b/commercedbsync/resources/sql/createSchedulerTablesPostGres.sql
@@ -0,0 +1,130 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+
+
+DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYTASKS;
+
+#
+
+CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYTASKS (
+    targetnodeId int NOT NULL,
+    migrationId VARCHAR(255) NOT NULL,
+    pipelinename VARCHAR(255) NOT NULL,
+    sourcetablename VARCHAR(255) NOT NULL,
+    targettablename VARCHAR(255) NOT NULL,
+    columnmap text NULL,
+    duration VARCHAR(255) NULL,
+    sourcerowcount int NOT NULL DEFAULT 0,
+    targetrowcount int NOT NULL DEFAULT 0,
+    failure char(1) NOT NULL DEFAULT '0',
+    error text NULL,
+    published char(1) NOT NULL DEFAULT '0',
+    lastupdate timestamp NOT NULL DEFAULT '0001-01-01 00:00:00',
+    avgwriterrowthroughput numeric(10,2) NULL DEFAULT 0,
+    avgreaderrowthroughput numeric(10,2) NULL DEFAULT 0,
+    durationinseconds numeric(10,2) NULL DEFAULT 0,
+    PRIMARY KEY (migrationid, targetnodeid, pipelinename)
+);
+
+#
+
+DROP TABLE IF EXISTS MIGRATIONTOOLKIT_TABLECOPYSTATUS;
+
+#
+
+CREATE TABLE MIGRATIONTOOLKIT_TABLECOPYSTATUS (
+    migrationId VARCHAR(255) NOT NULL,
+    startAt timestamp NOT NULL DEFAULT NOW(),
+    endAt timestamp,
+    lastUpdate timestamp,
+    total int NOT NULL DEFAULT 0,
+    completed int NOT NULL DEFAULT 0,
+    failed int NOT NULL DEFAULT 0,
+    status VARCHAR(255) NOT NULL DEFAULT 'RUNNING',
+    PRIMARY KEY (migrationid)
+);
+
+#
+
+DROP TRIGGER IF EXISTS MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update ON MIGRATIONTOOLKIT_TABLECOPYTASKS CASCADE;
+
+#
+
+DROP FUNCTION IF EXISTS MIGRATIONTOOLKIT_TABLECOPYSTATUS_proc;
+
+#
+
+CREATE FUNCTION MIGRATIONTOOLKIT_TABLECOPYSTATUS_proc() RETURNS trigger AS $$
+
+DECLARE relevant_count integer default 0;
+BEGIN
+
+    -- row-level trigger: the NEW row carries the latest update timestamp for its migration
+    UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS AS s
+    SET lastUpdate = NEW.lastUpdate
+    WHERE s.migrationId = NEW.migrationId;
+
+    relevant_count := CASE WHEN NEW.failure = '1' OR NEW.duration IS NOT NULL THEN 1 ELSE 0 END;
+
+    IF relevant_count > 0 then
+        -- update completed count when tasks complete
+        UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS AS s
+        SET completed = t.completed
+        FROM ( SELECT migrationId, COUNT(pipelinename) AS completed
+               FROM MIGRATIONTOOLKIT_TABLECOPYTASKS
+               WHERE duration IS NOT NULL
+               GROUP BY migrationId
+
) AS t + WHERE s.migrationId = t.migrationId; + + -- update failed count when tasks failed + UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS AS s + SET failed = t.failed + FROM ( SELECT migrationId, COUNT(pipelinename) AS failed + FROM MIGRATIONTOOLKIT_TABLECOPYTASKS + WHERE failure = '1' + GROUP BY migrationId + ) AS t + WHERE s.migrationId = t.migrationId; + + UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS + SET endAt = NOW() + WHERE total = completed + AND endAt IS NULL; + + UPDATE MIGRATIONTOOLKIT_TABLECOPYSTATUS + SET status = 'PROCESSED' + WHERE status = 'RUNNING' + AND total = completed; +END if; + RETURN NULL; +END; +$$ LANGUAGE plpgsql; + +# + +CREATE TRIGGER MIGRATIONTOOLKIT_TABLECOPYSTATUS_Update + AFTER INSERT OR UPDATE ON MIGRATIONTOOLKIT_TABLECOPYTASKS + FOR EACH ROW EXECUTE PROCEDURE MIGRATIONTOOLKIT_TABLECOPYSTATUS_proc(); + +# \ No newline at end of file diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/CommercedbsyncStandalone.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/CommercedbsyncStandalone.java new file mode 100644 index 0000000..6e3e587 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/CommercedbsyncStandalone.java @@ -0,0 +1,44 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync; + +import de.hybris.platform.core.Registry; +import de.hybris.platform.jalo.JaloSession; +import de.hybris.platform.util.RedeployUtilities; +import de.hybris.platform.util.Utilities; + + +/** + * Demonstration of how to write a standalone application that can be run directly from within eclipse or from the + * commandline.
+ * To run this from commandline, just use the following command:
+ *
+ * java -jar bootstrap/bin/ybootstrap.jar "new CommercedbsyncStandalone().run();"
+ *
+ * From eclipse, just run as Java Application. Note that you may need to add all other projects like
+ * ext-commerce and ext-pim to the launch configuration classpath.
+ */
+public class CommercedbsyncStandalone {
+    /**
+     * Main method to be able to run this class directly as a java program.
+     *
+     * @param args the arguments from the command line
+     */
+    public static void main(final String[] args) {
+        new CommercedbsyncStandalone().run();
+    }
+
+    public void run() {
+        Registry.activateStandaloneMode();
+        Registry.activateMasterTenant();
+
+        final JaloSession jaloSession = JaloSession.getCurrentSession();
+        System.out.println("Session ID: " + jaloSession.getSessionID()); //NOPMD
+        System.out.println("User: " + jaloSession.getUser()); //NOPMD
+        Utilities.printAppInfo();
+
+        RedeployUtilities.shutdown();
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/DataRepositoryAdapter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/DataRepositoryAdapter.java
new file mode 100644
index 0000000..e235c34
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/DataRepositoryAdapter.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.adapter;
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+
+public interface DataRepositoryAdapter {
+    long getRowCount(MigrationContext context, String table) throws Exception;
+
+    DataSet getAll(MigrationContext context, String table) throws Exception;
+
+    DataSet getBatchWithoutIdentifier(MigrationContext context, OffsetQueryDefinition queryDefinition) throws Exception;
+
+    DataSet getBatchOrderedByColumn(MigrationContext context, SeekQueryDefinition queryDefinition) throws Exception;
+
+    DataSet getBatchMarkersOrderedByColumn(MigrationContext context, MarkersQueryDefinition queryDefinition) throws Exception;
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/impl/ContextualDataRepositoryAdapter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/impl/ContextualDataRepositoryAdapter.java
new file mode 100644
index 0000000..b0aae83
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/adapter/impl/ContextualDataRepositoryAdapter.java
@@ -0,0 +1,89 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.adapter.impl;
+
+import com.sap.cx.boosters.commercedbsync.adapter.DataRepositoryAdapter;
+import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+
+import java.time.Instant;
+
+/**
+ * Controls the way the repository is accessed by adapting the most common reading
+ * operations based on the configured context
+ */
+public class ContextualDataRepositoryAdapter implements DataRepositoryAdapter {
+
+    private final DataRepository repository;
+
+    public ContextualDataRepositoryAdapter(DataRepository repository) {
+        this.repository = repository;
+    }
+
+    @Override
+    public long getRowCount(MigrationContext context, String table) throws Exception {
+        if (context.isDeletionEnabled() || context.isLpTableMigrationEnabled()) {
+            return repository.getRowCountModifiedAfter(table, getIncrementalTimestamp(context), context.isDeletionEnabled(), context.isLpTableMigrationEnabled());
+        } else if (context.isIncrementalModeEnabled()) {
+            return repository.getRowCountModifiedAfter(table, getIncrementalTimestamp(context));
+        } else {
+            return repository.getRowCount(table);
+        }
+    }
+
+    @Override
+    public DataSet getAll(MigrationContext context, String table) throws Exception {
+        if (context.isIncrementalModeEnabled()) {
+            return repository.getAllModifiedAfter(table, getIncrementalTimestamp(context));
+        } else {
+            return repository.getAll(table);
+        }
+    }
+
+    @Override
+    public DataSet getBatchWithoutIdentifier(MigrationContext context, OffsetQueryDefinition queryDefinition) throws Exception {
+        if (context.isIncrementalModeEnabled()) {
+            return repository.getBatchWithoutIdentifier(queryDefinition, getIncrementalTimestamp(context));
+        } else {
+            return repository.getBatchWithoutIdentifier(queryDefinition);
+        }
+    }
+
+    @Override
+    public DataSet getBatchOrderedByColumn(MigrationContext context, SeekQueryDefinition queryDefinition) throws Exception {
+        if (context.isIncrementalModeEnabled()) {
+            return repository.getBatchOrderedByColumn(queryDefinition, getIncrementalTimestamp(context));
+        } else {
+            return repository.getBatchOrderedByColumn(queryDefinition);
+        }
+    }
+
+    @Override
+    public DataSet getBatchMarkersOrderedByColumn(MigrationContext context, MarkersQueryDefinition queryDefinition) throws Exception {
+        if (context.isIncrementalModeEnabled()) {
+            return repository.getBatchMarkersOrderedByColumn(queryDefinition, getIncrementalTimestamp(context));
+        } else {
+            return repository.getBatchMarkersOrderedByColumn(queryDefinition);
+        }
+    }
+
+    private Instant getIncrementalTimestamp(MigrationContext context) {
+        Instant incrementalTimestamp = context.getIncrementalTimestamp();
+        if (incrementalTimestamp == null) {
+            throw new IllegalStateException("Timestamp cannot be null in incremental mode. Set a timestamp using the property " + CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TIMESTAMP);
+        }
+        return incrementalTimestamp;
+    }
+}
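The adapter above is the single switch between full and incremental reads. A minimal usage sketch (the context instance is assumed to come from the extension's Spring wiring; this class is illustrative and not part of the extension):

import com.sap.cx.boosters.commercedbsync.adapter.DataRepositoryAdapter;
import com.sap.cx.boosters.commercedbsync.adapter.impl.ContextualDataRepositoryAdapter;
import com.sap.cx.boosters.commercedbsync.context.MigrationContext;

public final class RowCountExample {
    /**
     * Counts the rows of the given table, honouring incremental mode when it is
     * enabled in the context (migration.data.incremental.enabled plus a configured
     * timestamp); otherwise all rows are counted.
     */
    public static long countRows(MigrationContext context, String table) throws Exception {
        DataRepositoryAdapter adapter = new ContextualDataRepositoryAdapter(context.getDataSourceRepository());
        return adapter.getRowCount(context, table);
    }
}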
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipe.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipe.java
new file mode 100644
index 0000000..fd00539
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipe.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import javax.annotation.concurrent.ThreadSafe;
+
+/**
+ * Separates database reading from writing: after data is read from the source DB,
+ * the result is put into the pipe and consumed asynchronously by the database writer.
+ *
+ * @param <T> the element type transported through the pipe
+ */
+@ThreadSafe
+public interface DataPipe<T> {
+    void requestAbort(Exception e);
+
+    void put(MaybeFinished<T> value) throws Exception;
+
+    MaybeFinished<T> get() throws Exception;
+}
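A sketch of the consumer side, i.e. how a writer might drain such a pipe, following the done/poison conventions of MaybeFinished further below (the class is illustrative, not part of the extension):

import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe;
import com.sap.cx.boosters.commercedbsync.concurrent.MaybeFinished;
import com.sap.cx.boosters.commercedbsync.dataset.DataSet;

public final class PipeDrainExample {
    /** Reads data sets until the producer signals completion or poisons the pipe. */
    public static void drain(DataPipe<DataSet> pipe) throws Exception {
        while (true) {
            MaybeFinished<DataSet> next = pipe.get();
            if (next.isPoison()) {
                throw new IllegalStateException("producer failed; aborting write side");
            }
            if (next.isDone()) {
                break; // finished marker: all batches were delivered
            }
            write(next.getValue()); // hypothetical write step
        }
    }

    private static void write(DataSet dataSet) {
        // illustrative placeholder for the actual database writer
    }
}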
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipeFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipeFactory.java
new file mode 100644
index 0000000..77bdcca
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataPipeFactory.java
@@ -0,0 +1,17 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+
+import javax.annotation.concurrent.ThreadSafe;
+
+@ThreadSafe
+public interface DataPipeFactory<T> {
+    DataPipe<T> create(CopyContext context, CopyContext.DataCopyItem item) throws Exception;
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerExecutor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerExecutor.java
new file mode 100644
index 0000000..019ee9a
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerExecutor.java
@@ -0,0 +1,17 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+
+public interface DataWorkerExecutor<T> {
+    Future<T> safelyExecute(Callable<T> callable) throws InterruptedException;
+
+    void waitAndRethrowUncaughtExceptions() throws ExecutionException, InterruptedException;
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerPoolFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerPoolFactory.java
new file mode 100644
index 0000000..64e7f0c
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/DataWorkerPoolFactory.java
@@ -0,0 +1,14 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
+
+public interface DataWorkerPoolFactory {
+    ThreadPoolTaskExecutor create(CopyContext context);
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MDCTaskDecorator.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MDCTaskDecorator.java
new file mode 100644
index 0000000..8d37302
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MDCTaskDecorator.java
@@ -0,0 +1,27 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import org.slf4j.MDC;
+import org.springframework.core.task.TaskDecorator;
+
+import java.util.Map;
+
+public class MDCTaskDecorator implements TaskDecorator {
+    @Override
+    public Runnable decorate(Runnable runnable) {
+        Map<String, String> contextMap = MDC.getCopyOfContextMap();
+        return () -> {
+            try {
+                MDC.setContextMap(contextMap);
+                runnable.run();
+            } finally {
+                MDC.clear();
+            }
+        };
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MaybeFinished.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MaybeFinished.java
new file mode 100644
index 0000000..e6458da
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/MaybeFinished.java
@@ -0,0 +1,49 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+/**
+ * Tracks the status of the data set currently being processed: when processing
+ * completed normally the element is marked as done; when an exception occurred
+ * the element is a poison marker.
+ *
+ * @param <T> the payload type transported through the pipe
+ */
+public class MaybeFinished<T> {
+    private final T value;
+    private final boolean done;
+    private final boolean poison;
+
+    private MaybeFinished(T value, boolean done, boolean poison) {
+        this.value = value;
+        this.done = done;
+        this.poison = poison;
+    }
+
+    public static <T> MaybeFinished<T> of(T value) {
+        return new MaybeFinished<>(value, false, false);
+    }
+
+    public static <T> MaybeFinished<T> finished(T value) {
+        return new MaybeFinished<>(value, true, false);
+    }
+
+    public static <T> MaybeFinished<T> poison() {
+        return new MaybeFinished<>(null, true, true);
+    }
+
+    public T getValue() {
+        return value;
+    }
+
+    public boolean isDone() {
+        return done;
+    }
+
+    public boolean isPoison() {
+        return poison;
+    }
+}
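MDCTaskDecorator above copies the caller's logging context (migration id, pipeline, cluster id) onto pooled worker threads so that log lines stay correlated. A hypothetical wiring sketch, separate from the extension's own DefaultDataWorkerPoolFactory:

import com.sap.cx.boosters.commercedbsync.concurrent.MDCTaskDecorator;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public final class WorkerPoolWiringExample {
    /** Builds a small worker pool whose tasks inherit the caller's MDC entries. */
    public static ThreadPoolTaskExecutor mdcAwarePool() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setTaskDecorator(new MDCTaskDecorator());
        executor.setThreadNamePrefix("MigrationWorker-");
        executor.setMaxPoolSize(4);
        executor.initialize();
        return executor;
    }
}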
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/PipeAbortedException.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/PipeAbortedException.java
new file mode 100644
index 0000000..8fe142b
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/PipeAbortedException.java
@@ -0,0 +1,17 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+public class PipeAbortedException extends Exception {
+    public PipeAbortedException(String message) {
+        super(message);
+    }
+
+    public PipeAbortedException(String message, Throwable cause) {
+        super(message, cause);
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/RetriableTask.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/RetriableTask.java
new file mode 100644
index 0000000..353c218
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/RetriableTask.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent;
+
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.Callable;
+
+public abstract class RetriableTask implements Callable<Boolean> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(RetriableTask.class);
+
+    private final CopyContext context;
+    private final String table;
+    private int retryCount = 0;
+
+    public RetriableTask(CopyContext context, String table) {
+        this.context = context;
+        this.table = table;
+    }
+
+    @Override
+    public Boolean call() {
+        try {
+            return internalRun();
+        } catch (PipeAbortedException e) {
+            throw new RuntimeException("Ignore retries", e);
+        } catch (Exception e) {
+            if (retryCount < context.getMigrationContext().getMaxWorkerRetryAttempts()) {
+                LOG.warn("Retrying failed task {} for table {}. Retry count: {}", getClass().getName(), table, retryCount, e);
+                retryCount++;
+                return call();
+            } else {
+                handleFailure(e);
+                return Boolean.FALSE;
+            }
+        }
+    }
+
+    protected void handleFailure(Exception e) {
+        throw new RuntimeException(ExceptionUtils.getRootCauseMessage(e), e);
+    }
+
+    protected abstract Boolean internalRun() throws Exception;
+
+}
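RetriableTask retries internalRun() up to migration.data.workers.retryattempts times before surfacing the root cause, while a PipeAbortedException short-circuits the retries. A minimal subclass sketch (name and body are illustrative assumptions):

import com.sap.cx.boosters.commercedbsync.concurrent.RetriableTask;
import com.sap.cx.boosters.commercedbsync.context.CopyContext;

public class PingTask extends RetriableTask {

    public PingTask(CopyContext context, String table) {
        super(context, table);
    }

    @Override
    protected Boolean internalRun() throws Exception {
        // illustrative work unit; a real task would read or write one batch here.
        // Throwing any exception other than PipeAbortedException triggers a retry.
        return Boolean.TRUE;
    }
}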
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipe.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipe.java
new file mode 100644
index 0000000..6d07f3f
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipe.java
@@ -0,0 +1,98 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent.impl;
+
+import com.sap.cx.boosters.commercedbsync.concurrent.MaybeFinished;
+import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants;
+import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository;
+import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe;
+import com.sap.cx.boosters.commercedbsync.concurrent.PipeAbortedException;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+public class DefaultDataPipe<T> implements DataPipe<T> {
+    private static final Logger LOG = LoggerFactory.getLogger(DefaultDataPipe.class);
+
+    private final BlockingQueue<MaybeFinished<T>> queue;
+    private final int defaultTimeout;
+    private final AtomicReference<Exception> abortException = new AtomicReference<>();
+    private final CopyContext context;
+    private final CopyContext.DataCopyItem copyItem;
+    private final DatabaseCopyTaskRepository taskRepository;
+    private final DatabaseCopyScheduler scheduler;
+
+    public DefaultDataPipe(DatabaseCopyScheduler scheduler, DatabaseCopyTaskRepository taskRepository, CopyContext context, CopyContext.DataCopyItem copyItem, int timeoutInSeconds, int capacity) {
+        this.taskRepository = taskRepository;
+        this.scheduler = scheduler;
+        this.context = context;
+        this.copyItem = copyItem;
+        this.queue = new ArrayBlockingQueue<>(capacity);
+        defaultTimeout = timeoutInSeconds;
+    }
+
+    @Override
+    public void requestAbort(Exception cause) {
+        if (this.abortException.compareAndSet(null, cause)) {
+            if (context.getMigrationContext().isFailOnErrorEnabled()) {
+                try {
+                    scheduler.abort(context);
+                } catch (Exception ex) {
+                    LOG.warn("could not abort", ex);
+                }
+            }
+            try {
+                taskRepository.markTaskFailed(context, copyItem, cause);
+            } catch (Exception e) {
+                LOG.warn("could not update error status!", e);
+            }
+            try {
+                this.queue.offer(MaybeFinished.poison(), defaultTimeout, TimeUnit.SECONDS);
+            } catch (InterruptedException e) {
+                LOG.warn("Could not flush pipe with poison", e);
+            }
+        }
+    }
+
+    private boolean isAborted() throws Exception {
+        if (this.abortException.get() == null && scheduler.isAborted(this.context)) {
+            this.requestAbort(new PipeAbortedException("Migration aborted"));
+        }
+        return this.abortException.get() != null;
+    }
+
+    @Override
+    public void put(MaybeFinished<T> value) throws Exception {
+        if (isAborted()) {
+            throw new PipeAbortedException("pipe aborted", this.abortException.get());
+        }
+        if (!queue.offer(value, defaultTimeout, TimeUnit.SECONDS)) {
+            throw new RuntimeException("cannot put new item in time");
+        }
+    }
+
+    @Override
+    public MaybeFinished<T> get() throws Exception {
+        if (isAborted()) {
+            throw new PipeAbortedException("pipe aborted", this.abortException.get());
+        }
+        MaybeFinished<T> element = queue.poll(defaultTimeout, TimeUnit.SECONDS);
+        if (isAborted()) {
+            throw new PipeAbortedException("pipe aborted", this.abortException.get());
+        }
+        if (element == null) {
+            throw new RuntimeException(String.format("cannot get new item in time. Consider increasing the value of the property '%s' or '%s'", CommercedbsyncConstants.MIGRATION_DATA_PIPE_TIMEOUT, CommercedbsyncConstants.MIGRATION_DATA_PIPE_CAPACITY));
+        }
+        return element;
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipeFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipeFactory.java
new file mode 100644
index 0000000..abafa03
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataPipeFactory.java
@@ -0,0 +1,319 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent.impl;
+
+import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerExecutor;
+import com.sap.cx.boosters.commercedbsync.concurrent.MaybeFinished;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceCategory;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceRecorder;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceUnit;
+import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository;
+import org.apache.commons.lang3.tuple.Pair;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.adapter.DataRepositoryAdapter;
+import com.sap.cx.boosters.commercedbsync.adapter.impl.ContextualDataRepositoryAdapter;
+import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe;
+import com.sap.cx.boosters.commercedbsync.concurrent.DataPipeFactory;
+import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerPoolFactory;
+import com.sap.cx.boosters.commercedbsync.concurrent.RetriableTask;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.core.task.AsyncTaskExecutor;
+import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
+
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class DefaultDataPipeFactory implements DataPipeFactory<DataSet> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(DefaultDataPipeFactory.class);
+
+    private final DatabaseCopyTaskRepository taskRepository;
+    private final DatabaseCopyScheduler scheduler;
+    private final AsyncTaskExecutor executor;
+    private final DataWorkerPoolFactory dataReadWorkerPoolFactory;
+
+    public DefaultDataPipeFactory(DatabaseCopyScheduler scheduler, DatabaseCopyTaskRepository taskRepository, AsyncTaskExecutor executor, DataWorkerPoolFactory dataReadWorkerPoolFactory) {
+        this.scheduler = scheduler;
+        this.taskRepository = taskRepository;
+        this.executor = executor;
+        this.dataReadWorkerPoolFactory = dataReadWorkerPoolFactory;
+    }
+
+    @Override
+    public DataPipe<DataSet> create(CopyContext context, CopyContext.DataCopyItem item) throws Exception {
+        int dataPipeTimeout = context.getMigrationContext().getDataPipeTimeout();
+        int dataPipeCapacity = context.getMigrationContext().getDataPipeCapacity();
+        DataPipe<DataSet> pipe = new DefaultDataPipe<>(scheduler, taskRepository, context, item, dataPipeTimeout, dataPipeCapacity);
+        ThreadPoolTaskExecutor taskExecutor = dataReadWorkerPoolFactory.create(context);
+        DataWorkerExecutor<Boolean> workerExecutor = new DefaultDataWorkerExecutor<>(taskExecutor);
+        try {
+            executor.submit(() -> {
+                try {
+                    scheduleWorkers(context, workerExecutor, pipe, item);
+                    workerExecutor.waitAndRethrowUncaughtExceptions();
+                    pipe.put(MaybeFinished.finished(DataSet.EMPTY));
+                } catch (Exception e) {
+                    LOG.error("Error scheduling worker tasks ", e);
+                    try {
+                        pipe.put(MaybeFinished.poison());
+                    } catch (Exception p) {
+                        LOG.error("Cannot contaminate pipe ", p);
+                    }
+                } finally {
+                    if (taskExecutor != null) {
+                        taskExecutor.shutdown();
+                    }
+                }
+            });
+        } catch (Exception e) {
+            throw new RuntimeException("Error invoking reader tasks ", e);
+        }
+        return pipe;
+    }
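+    // create() returns the pipe immediately: the submitted orchestration task
+    // schedules the reader workers, waits for them to complete, and then marks
+    // the pipe as finished - or poisons it so that the writer side fails fast.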
+
+    private void scheduleWorkers(CopyContext context, DataWorkerExecutor<Boolean> workerExecutor, DataPipe<DataSet> pipe, CopyContext.DataCopyItem copyItem) throws Exception {
+        DataRepositoryAdapter dataRepositoryAdapter = new ContextualDataRepositoryAdapter(context.getMigrationContext().getDataSourceRepository());
+        String table = copyItem.getSourceItem();
+        long totalRows = copyItem.getRowCount();
+        long pageSize = context.getMigrationContext().getReaderBatchSize();
+        try {
+            PerformanceRecorder recorder = context.getPerformanceProfiler().createRecorder(PerformanceCategory.DB_READ, table);
+            recorder.start();
+
+            PipeTaskContext pipeTaskContext = new PipeTaskContext(context, pipe, table, dataRepositoryAdapter, pageSize, recorder);
+
+            String batchColumn = "";
+            // help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/LATEST/en-US/08a27931a21441b59094c8a6aa2a880e.html
+            if (context.getMigrationContext().getDataSourceRepository().isAuditTable(table) &&
+                    context.getMigrationContext().getDataSourceRepository().getAllColumnNames(table).contains("ID")) {
+                batchColumn = "ID";
+            } else if (context.getMigrationContext().getDataSourceRepository().getAllColumnNames(table).contains("PK")) {
+                batchColumn = "PK";
+            }
+            LOG.debug("Using batchColumn: {}", batchColumn.isEmpty() ? "NONE" : batchColumn);
+
+            if (batchColumn.isEmpty()) {
+                // trying offset queries with unique index columns
+                Set<String> batchColumns;
+                DataSet uniqueColumns = context.getMigrationContext().getDataSourceRepository().getUniqueColumns(table);
+                if (uniqueColumns.isNotEmpty()) {
+                    if (uniqueColumns.getColumnCount() == 0) {
+                        throw new IllegalStateException("Corrupt dataset retrieved. Dataset should have information about unique columns");
+                    }
+                    batchColumns = uniqueColumns.getAllResults().stream().map(row -> String.valueOf(row.get(0))).collect(Collectors.toSet());
+                    for (int offset = 0; offset < totalRows; offset += pageSize) {
+                        DataReaderTask dataReaderTask = new BatchOffsetDataReaderTask(pipeTaskContext, offset, batchColumns);
+                        workerExecutor.safelyExecute(dataReaderTask);
+                    }
+                } else {
+                    // if no unique columns are available for batch sorting, fall back to reading everything at once
+                    LOG.warn("Reading all rows at once without batching for table {}. Memory consumption might be negatively affected", table);
+                    DataReaderTask dataReaderTask = new DefaultDataReaderTask(pipeTaskContext);
+                    workerExecutor.safelyExecute(dataReaderTask);
+                }
+            } else {
+                // do the pagination by value comparison
+                MarkersQueryDefinition queryDefinition = new MarkersQueryDefinition();
+                queryDefinition.setTable(table);
+                queryDefinition.setColumn(batchColumn);
+                queryDefinition.setBatchSize(pageSize);
+                queryDefinition.setDeletionEnabled(context.getMigrationContext().isDeletionEnabled());
+                queryDefinition.setLpTableEnabled(context.getMigrationContext().isLpTableMigrationEnabled());
+                DataSet batchMarkers = dataRepositoryAdapter.getBatchMarkersOrderedByColumn(context.getMigrationContext(), queryDefinition);
+                List<List<Object>> batchMarkersList = batchMarkers.getAllResults();
+                if (batchMarkersList.isEmpty()) {
+                    throw new RuntimeException("Could not retrieve batch values for table " + table);
+                }
+                for (int i = 0; i < batchMarkersList.size(); i++) {
+                    List<Object> lastBatchMarkerRow = batchMarkersList.get(i);
+                    Optional<List<Object>> nextBatchMarkerRow = Optional.empty();
+                    int nextIndex = i + 1;
+                    if (nextIndex < batchMarkersList.size()) {
+                        nextBatchMarkerRow = Optional.of(batchMarkersList.get(nextIndex));
+                    }
+                    DataReaderTask dataReaderTask = new BatchMarkerDataReaderTask(pipeTaskContext, batchColumn, Pair.of(lastBatchMarkerRow, nextBatchMarkerRow));
+                    workerExecutor.safelyExecute(dataReaderTask);
+                }
+            }
+        } catch (Exception ex) {
+            LOG.error("{}: Exception while preparing reader tasks", table, ex);
+            pipe.requestAbort(ex);
+            if (ex instanceof InterruptedException) {
+                Thread.currentThread().interrupt();
+            }
+            throw new RuntimeException("Exception while preparing reader tasks", ex);
+        }
+    }
+
+    private static abstract class DataReaderTask extends RetriableTask {
+        private static final Logger LOG = LoggerFactory.getLogger(DataReaderTask.class);
+
+        private final PipeTaskContext pipeTaskContext;
+
+        public DataReaderTask(PipeTaskContext pipeTaskContext) {
+            super(pipeTaskContext.getContext(), pipeTaskContext.getTable());
+            this.pipeTaskContext = pipeTaskContext;
+        }
+
+        public PipeTaskContext getPipeTaskContext() {
+            return pipeTaskContext;
+        }
+    }
+
+    private static class DefaultDataReaderTask extends DataReaderTask {
+
+        public DefaultDataReaderTask(PipeTaskContext pipeTaskContext) {
+            super(pipeTaskContext);
+        }
+
+        @Override
+        protected Boolean internalRun() throws Exception {
+            process();
+            return Boolean.TRUE;
+        }
+
+        private void process() throws Exception {
+            MigrationContext migrationContext = getPipeTaskContext().getContext().getMigrationContext();
+            DataSet all = getPipeTaskContext().getDataRepositoryAdapter().getAll(migrationContext, getPipeTaskContext().getTable());
+            getPipeTaskContext().getRecorder().record(PerformanceUnit.ROWS, all.getAllResults().size());
+            getPipeTaskContext().getPipe().put(MaybeFinished.of(all));
+        }
+    }
+
+    private static class BatchOffsetDataReaderTask extends DataReaderTask {
+
+        private final long offset;
+        private final Set<String> batchColumns;
+
+        public BatchOffsetDataReaderTask(PipeTaskContext pipeTaskContext, long offset, Set<String> batchColumns) {
+            super(pipeTaskContext);
+            this.offset = offset;
+            this.batchColumns = batchColumns;
+        }
+
+        @Override
+        protected Boolean internalRun() throws Exception {
+            process();
+            return Boolean.TRUE;
+        }
+
+        private void process() throws Exception {
+            DataRepositoryAdapter adapter = getPipeTaskContext().getDataRepositoryAdapter();
+            CopyContext context = getPipeTaskContext().getContext();
+            String table = getPipeTaskContext().getTable();
+            long pageSize = getPipeTaskContext().getPageSize();
+            OffsetQueryDefinition queryDefinition = new OffsetQueryDefinition();
+            queryDefinition.setTable(table);
+            queryDefinition.setAllColumns(batchColumns);
+            queryDefinition.setBatchSize(pageSize);
+            queryDefinition.setOffset(offset);
+            queryDefinition.setDeletionEnabled(context.getMigrationContext().isDeletionEnabled());
+            queryDefinition.setLpTableEnabled(context.getMigrationContext().isLpTableMigrationEnabled());
+            DataSet result = adapter.getBatchWithoutIdentifier(context.getMigrationContext(), queryDefinition);
+            getPipeTaskContext().getRecorder().record(PerformanceUnit.ROWS, result.getAllResults().size());
+            getPipeTaskContext().getPipe().put(MaybeFinished.of(result));
+        }
+    }
+
+    private static class BatchMarkerDataReaderTask extends DataReaderTask {
+
+        private final String batchColumn;
+        private final Pair<List<Object>, Optional<List<Object>>> batchMarkersPair;
+
+        public BatchMarkerDataReaderTask(PipeTaskContext pipeTaskContext, String batchColumn, Pair<List<Object>, Optional<List<Object>>> batchMarkersPair) {
+            super(pipeTaskContext);
+            this.batchColumn = batchColumn;
+            this.batchMarkersPair = batchMarkersPair;
+        }
+
+        @Override
+        protected Boolean internalRun() throws Exception {
+            List<Object> lastBatchMarker = batchMarkersPair.getLeft();
+            Optional<List<Object>> nextBatchMarker = batchMarkersPair.getRight();
+            if (lastBatchMarker != null && lastBatchMarker.size() == 2) {
+                Object lastBatchValue = lastBatchMarker.get(0);
+                process(lastBatchValue, nextBatchMarker.map(v -> v.get(0)));
+                return Boolean.TRUE;
+            } else {
+                throw new IllegalArgumentException("Invalid batch marker passed to task");
+            }
+        }
+
+        private void process(Object lastValue, Optional<Object> nextValue) throws Exception {
+            CopyContext ctx = getPipeTaskContext().getContext();
+            DataRepositoryAdapter adapter = getPipeTaskContext().getDataRepositoryAdapter();
+            String table = getPipeTaskContext().getTable();
+            long pageSize = getPipeTaskContext().getPageSize();
+            SeekQueryDefinition queryDefinition = new SeekQueryDefinition();
+            queryDefinition.setTable(table);
+            queryDefinition.setColumn(batchColumn);
+            queryDefinition.setLastColumnValue(lastValue);
+            queryDefinition.setNextColumnValue(nextValue.orElse(null));
+            queryDefinition.setBatchSize(pageSize);
+            queryDefinition.setDeletionEnabled(ctx.getMigrationContext().isDeletionEnabled());
+            queryDefinition.setLpTableEnabled(ctx.getMigrationContext().isLpTableMigrationEnabled());
+            DataSet page = adapter.getBatchOrderedByColumn(ctx.getMigrationContext(), queryDefinition);
+            // record the rows actually read; the last page may be smaller than pageSize
+            getPipeTaskContext().getRecorder().record(PerformanceUnit.ROWS, page.getAllResults().size());
+            getPipeTaskContext().getPipe().put(MaybeFinished.of(page));
+        }
+    }
+
+    private static class PipeTaskContext {
+        private final CopyContext context;
+        private final DataPipe<DataSet> pipe;
+        private final String table;
+        private final DataRepositoryAdapter dataRepositoryAdapter;
+        private final long pageSize;
+        private final PerformanceRecorder recorder;
+
+        public PipeTaskContext(CopyContext context, DataPipe<DataSet> pipe, String table, DataRepositoryAdapter dataRepositoryAdapter, long pageSize, PerformanceRecorder recorder) {
+            this.context = context;
+            this.pipe = pipe;
+            this.table = table;
+            this.dataRepositoryAdapter = dataRepositoryAdapter;
+            this.pageSize = pageSize;
+            this.recorder = recorder;
+        }
+
+        public CopyContext getContext() {
+            return context;
+        }
+
+        public DataPipe<DataSet> getPipe() {
+            return pipe;
+        }
+
+        public String getTable() {
+            return table;
+        }
+
+        public DataRepositoryAdapter getDataRepositoryAdapter() {
+            return dataRepositoryAdapter;
+        }
+
+        public long getPageSize() {
+            return pageSize;
+        }
+
+        public PerformanceRecorder getRecorder() {
+            return recorder;
+        }
+    }
+}
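The factory above picks one of three read strategies: seek pagination over PK/ID batch markers, offset pagination over unique-index columns, or a single full-table read. A sketch of the range implied by a BatchMarkerDataReaderTask's SeekQueryDefinition (the exact SQL is produced by the repository implementation; this rendering is an assumption for illustration only):

public final class SeekRangeExample {
    /**
     * Renders the kind of range predicate implied by SeekQueryDefinition:
     * rows from lastMarker (inclusive) up to nextMarker (exclusive), ordered
     * by the batch column. A null nextMarker means "read to the end".
     */
    public static String renderRange(String table, String column, Object lastMarker, Object nextMarker) {
        StringBuilder sql = new StringBuilder()
                .append("SELECT * FROM ").append(table)
                .append(" WHERE ").append(column).append(" >= ").append(lastMarker);
        if (nextMarker != null) {
            sql.append(" AND ").append(column).append(" < ").append(nextMarker);
        }
        return sql.append(" ORDER BY ").append(column).toString();
    }
}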
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerExecutor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerExecutor.java
new file mode 100644
index 0000000..62d53ce
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerExecutor.java
@@ -0,0 +1,64 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.concurrent.impl;
+
+import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerExecutor;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.core.task.AsyncTaskExecutor;
+import org.springframework.core.task.TaskRejectedException;
+import org.springframework.util.backoff.BackOffExecution;
+import org.springframework.util.backoff.ExponentialBackOff;
+
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+
+public class DefaultDataWorkerExecutor<T> implements DataWorkerExecutor<T> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(DefaultDataWorkerExecutor.class);
+
+    private final AsyncTaskExecutor executor;
+    private final Queue<Future<T>> futures = new ArrayDeque<>();
+
+    public DefaultDataWorkerExecutor(AsyncTaskExecutor executor) {
+        this.executor = executor;
+    }
+
+    @Override
+    public Future<T> safelyExecute(Callable<T> callable) throws InterruptedException {
+        Future<T> future = internalSafelyExecute(callable, 0);
+        futures.add(future);
+        return future;
+    }
+
+    private Future<T> internalSafelyExecute(Callable<T> callable, int rejections) throws InterruptedException {
+        try {
+            return executor.submit(callable);
+        } catch (TaskRejectedException e) {
+            BackOffExecution backOff = new ExponentialBackOff().start();
+            long waitInterval = backOff.nextBackOff();
+            for (int i = 0; i < rejections; i++) {
+                waitInterval = backOff.nextBackOff();
+            }
+            LOG.trace("worker rejected. Retrying in {}ms...", waitInterval);
+            Thread.sleep(waitInterval);
+            return internalSafelyExecute(callable, rejections + 1);
+        }
+    }
+
+    @Override
+    public void waitAndRethrowUncaughtExceptions() throws ExecutionException, InterruptedException {
+        Future<T> future;
+        while ((future = futures.poll()) != null) {
+            future.get();
+        }
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerPoolFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerPoolFactory.java
new file mode 100644
index 0000000..8de0f1e
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/concurrent/impl/DefaultDataWorkerPoolFactory.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.concurrent.impl; + +import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerPoolFactory; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import org.springframework.core.task.TaskDecorator; +import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; + +public class DefaultDataWorkerPoolFactory implements DataWorkerPoolFactory { + + private TaskDecorator taskDecorator; + private String threadNamePrefix; + private int corePoolSize; + private int maxPoolSize; + private int keepAliveSeconds; + private int queueCapacity = 2147483647; + + public DefaultDataWorkerPoolFactory(TaskDecorator taskDecorator, String threadNamePrefix, int maxPoolSize, int keepAliveSeconds, boolean queueable) { + this.taskDecorator = taskDecorator; + this.threadNamePrefix = threadNamePrefix; + this.maxPoolSize = maxPoolSize; + this.keepAliveSeconds = keepAliveSeconds; + this.queueCapacity = queueable ? this.queueCapacity : 0; + this.corePoolSize = maxPoolSize; + } + + @Override + public ThreadPoolTaskExecutor create(CopyContext context) { + ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor(); + executor.setTaskDecorator(taskDecorator); + executor.setThreadNamePrefix(threadNamePrefix); + executor.setCorePoolSize(corePoolSize); + executor.setMaxPoolSize(maxPoolSize); + executor.setQueueCapacity(queueCapacity); + executor.setKeepAliveSeconds(keepAliveSeconds); + executor.setAllowCoreThreadTimeOut(true); + executor.setWaitForTasksToCompleteOnShutdown(true); + executor.setAwaitTerminationSeconds(Integer.MAX_VALUE); + executor.initialize(); + return executor; + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/constants/CommercedbsyncConstants.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/constants/CommercedbsyncConstants.java new file mode 100644 index 0000000..3119bb9 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/constants/CommercedbsyncConstants.java @@ -0,0 +1,87 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.constants; + +import com.sap.cx.boosters.commercedbsync.constants.GeneratedCommercedbsyncConstants; + +/** + * Global class for all Commercedbsync constants. You can add global constants for your extension into this class. 
+ */ +public final class CommercedbsyncConstants extends GeneratedCommercedbsyncConstants { + public static final String EXTENSIONNAME = "commercedbsync"; + public static final String PROPERTIES_PREFIX = "migration"; + public static final String MIGRATION_TRIGGER_UPDATESYSTEM = "migration.trigger.updatesystem"; + public static final String MIGRATION_SCHEMA_ENABLED = "migration.schema.enabled"; + public static final String MIGRATION_SCHEMA_TARGET_TABLES_ADD_ENABLED = "migration.schema.target.tables.add.enabled"; + public static final String MIGRATION_SCHEMA_TARGET_TABLES_REMOVE_ENABLED = "migration.schema.target.tables.remove.enabled"; + public static final String MIGRATION_SCHEMA_TARGET_COLUMNS_ADD_ENABLED = "migration.schema.target.columns.add.enabled"; + public static final String MIGRATION_SCHEMA_TARGET_COLUMNS_REMOVE_ENABLED = "migration.schema.target.columns.remove.enabled"; + public static final String MIGRATION_TARGET_MAX_STAGE_MIGRATIONS = "migration.ds.target.db.max.stage.migrations"; + public static final String MIGRATION_SCHEMA_AUTOTRIGGER_ENABLED = "migration.schema.autotrigger.enabled"; + public static final String MIGRATION_DATA_READER_BATCHSIZE = "migration.data.reader.batchsize"; + public static final String MIGRATION_DATA_TRUNCATE_ENABLED = "migration.data.truncate.enabled"; + public static final String MIGRATION_DATA_TRUNCATE_EXCLUDED = "migration.data.truncate.excluded"; + public static final String MIGRATION_DATA_WORKERS_READER_MAXTASKS = "migration.data.workers.reader.maxtasks"; + public static final String MIGRATION_DATA_WORKERS_WRITER_MAXTASKS = "migration.data.workers.writer.maxtasks"; + public static final String MIGRATION_DATA_WORKERS_RETRYATTEMPTS = "migration.data.workers.retryattempts"; + public static final String MIGRATION_DATA_MAXPRALLELTABLECOPY = "migration.data.maxparalleltablecopy"; + public static final String MIGRATION_DATA_FAILONEERROR_ENABLED = "migration.data.failonerror.enabled"; + public static final String MIGRATION_DATA_COLUMNS_EXCLUDED = "migration.data.columns.excluded"; + public static final String MIGRATION_DATA_COLUMNS_NULLIFY = "migration.data.columns.nullify"; + public static final String MIGRATION_DATA_INDICES_DROP_ENABLED = "migration.data.indices.drop.enabled"; + public static final String MIGRATION_DATA_INDICES_DISABLE_ENABLED = "migration.data.indices.disable.enabled"; + public static final String MIGRATION_DATA_INDICES_DISABLE_INCLUDED = "migration.data.indices.disable.included"; + public static final String MIGRATION_DATA_TABLES_AUDIT_ENABLED = "migration.data.tables.audit.enabled"; + public static final String MIGRATION_DATA_TABLES_CUSTOM = "migration.data.tables.custom"; + public static final String MIGRATION_DATA_TABLES_EXCLUDED = "migration.data.tables.excluded"; + public static final String MIGRATION_DATA_TABLES_INCLUDED = "migration.data.tables.included"; + public static final String MIGRATION_CLUSTER_ENABLED = "migration.cluster.enabled"; + public static final String MIGRATION_DATA_INCREMENTAL_ENABLED = "migration.data.incremental.enabled"; + public static final String MIGRATION_DATA_INCREMENTAL_TABLES = "migration.data.incremental.tables"; + public static final String MIGRATION_DATA_INCREMENTAL_TIMESTAMP = "migration.data.incremental.timestamp"; + public static final String MIGRATION_DATA_BULKCOPY_ENABLED = "migration.data.bulkcopy.enabled"; + public static final String MIGRATION_DATA_PIPE_TIMEOUT = "migration.data.pipe.timeout"; + public static final String MIGRATION_DATA_PIPE_CAPACITY = "migration.data.pipe.capacity"; + 
+    public static final String MIGRATION_STALLED_TIMEOUT = "migration.stalled.timeout";
+    public static final String MIGRATION_DATA_REPORT_CONNECTIONSTRING = "migration.data.report.connectionstring";
+    public static final String MIGRATION_DATATYPE_CHECK = "migration.datatype.check";
+    public static final String MIGRATION_TABLESPREFIX = "MIGRATIONTOOLKIT_";
+
+    public static final String MDC_MIGRATIONID = "migrationID";
+    public static final String MDC_PIPELINE = "pipeline";
+    public static final String MDC_CLUSTERID = "clusterID";
+
+    public static final String DEPLOYMENTS_TABLE = "ydeployments";
+
+    // Masking
+    public static final String MIGRATION_REPORT_MASKED_PROPERTIES = "migration.properties.masked";
+    public static final String MASKED_VALUE = "***";
+
+    // Locale
+    public static final String MIGRATION_LOCALE_DEFAULT = "migration.locale.default";
+
+    // Incremental support
+    public static final String MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES = "migration.data.incremental.deletions.itemtypes";
+    public static final String MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES = "migration.data.incremental.deletions.typecodes";
+    public static final String MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES_ENABLED = "migration.data.incremental.deletions.itemtypes.enabled";
+    public static final String MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES_ENABLED = "migration.data.incremental.deletions.typecodes.enabled";
+    public static final String MIGRATION_DATA_DELETION_ENABLED = "migration.data.incremental.deletions.enabled";
+    public static final String MIGRATION_DATA_DELETION_TABLE = "migration.data.incremental.deletions.table";
+
+    // ORACLE_TARGET -- START
+    public static final String MIGRATION_ORACLE_MAX = "VARCHAR2\\(2147483647\\)";
+    public static final String MIGRATION_ORACLE_CLOB = "CLOB";
+    public static final String MIGRATION_ORACLE_VARCHAR24k = "VARCHAR2(4000)";
+    // ORACLE_TARGET -- END
+
+    private CommercedbsyncConstants() {
+        // empty to avoid instantiating this constant class
+    }
+
+}
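The migration is driven entirely by these property keys. For orientation, a hypothetical incremental setup in local.properties could look as follows (the table names and the timestamp are illustrative only, not defaults):

    migration.data.incremental.enabled=true
    migration.data.incremental.tables=paymentinfos,addresses
    migration.data.incremental.timestamp=2022-09-01T00:00:00Z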
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/CopyContext.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/CopyContext.java
new file mode 100644
index 0000000..66f8cf8
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/CopyContext.java
@@ -0,0 +1,133 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context;
+
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler;
+
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.StringJoiner;
+import java.util.TreeMap;
+
+/**
+ * Contains the information needed to copy data
+ */
+public class CopyContext {
+
+    private String migrationId;
+    private MigrationContext migrationContext;
+    private Set<DataCopyItem> copyItems;
+    private PerformanceProfiler performanceProfiler;
+
+    public CopyContext(String migrationId, MigrationContext migrationContext, Set<DataCopyItem> copyItems, PerformanceProfiler performanceProfiler) {
+        this.migrationId = migrationId;
+        this.migrationContext = migrationContext;
+        this.copyItems = copyItems;
+        this.performanceProfiler = performanceProfiler;
+    }
+
+    public IdCopyContext toIdCopyContext() {
+        return new IdCopyContext(migrationId, migrationContext, performanceProfiler);
+    }
+
+    public MigrationContext getMigrationContext() {
+        return migrationContext;
+    }
+
+    /**
+     * Items to be copied
+     *
+     * @return the items to be copied
+     */
+    public Set<DataCopyItem> getCopyItems() {
+        return copyItems;
+    }
+
+    public String getMigrationId() {
+        return migrationId;
+    }
+
+    public PerformanceProfiler getPerformanceProfiler() {
+        return performanceProfiler;
+    }
+
+    public static class DataCopyItem {
+        private final String sourceItem;
+        private final String targetItem;
+        private final Map<String, String> columnMap = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
+        private final Long rowCount;
+
+        public DataCopyItem(String sourceItem, String targetItem) {
+            this.sourceItem = sourceItem;
+            this.targetItem = targetItem;
+            this.rowCount = null;
+        }
+
+        public DataCopyItem(String sourceItem, String targetItem, Map<String, String> columnMap, Long rowCount) {
+            this.sourceItem = sourceItem;
+            this.targetItem = targetItem;
+            this.columnMap.clear();
+            this.columnMap.putAll(columnMap);
+            this.rowCount = rowCount;
+        }
+
+        public String getSourceItem() {
+            return sourceItem;
+        }
+
+        public String getTargetItem() {
+            return targetItem;
+        }
+
+        public String getPipelineName() {
+            return getSourceItem() + "->" + getTargetItem();
+        }
+
+        public Map<String, String> getColumnMap() {
+            return columnMap;
+        }
+
+        public Long getRowCount() {
+            return rowCount;
+        }
+
+        @Override
+        public String toString() {
+            return new StringJoiner(", ", DataCopyItem.class.getSimpleName() + "[", "]")
+                    .add("sourceItem='" + sourceItem + "'")
+                    .add("targetItem='" + targetItem + "'")
+                    .toString();
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            if (this == o) return true;
+            if (o == null || getClass() != o.getClass()) return false;
+            DataCopyItem that = (DataCopyItem) o;
+            return getSourceItem().equals(that.getSourceItem()) &&
+                    getTargetItem().equals(that.getTargetItem());
+        }
+
+        @Override
+        public int hashCode() {
+            return Objects.hash(getSourceItem(), getTargetItem());
+        }
+    }
+
+    public static class IdCopyContext extends CopyContext {
+
+        public IdCopyContext(String migrationId, MigrationContext migrationContext, PerformanceProfiler performanceProfiler) {
+            super(migrationId, migrationContext, null, performanceProfiler);
+        }
+
+        @Override
+        public Set<DataCopyItem> getCopyItems() {
+            throw new UnsupportedOperationException("This is lean copy context without the actual copy items");
+        }
+    }
+}
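As a rough usage sketch (not part of the diff): a CopyContext is assembled from a migration id, the MigrationContext, and one DataCopyItem per table pipeline. The variables migrationContext and profiler, the migration id, and the table name "products" are assumed for illustration:

    CopyContext.DataCopyItem item = new CopyContext.DataCopyItem("products", "products");
    Set<CopyContext.DataCopyItem> items = new HashSet<>();
    items.add(item);
    CopyContext copyContext = new CopyContext("migration-42", migrationContext, items, profiler);
    assert "products->products".equals(item.getPipelineName());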
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/IncrementalMigrationContext.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/IncrementalMigrationContext.java
new file mode 100644
index 0000000..54808cc
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/IncrementalMigrationContext.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context;
+
+import java.time.Instant;
+import java.util.Set;
+
+/**
+ * Extends the MigrationContext with the information needed to perform an incremental Source -> Target migration
+ */
+public interface IncrementalMigrationContext extends MigrationContext {
+
+    Instant getIncrementalMigrationTimestamp();
+
+    void setSchemaMigrationAutoTriggerEnabled(final boolean autoTriggerEnabled);
+
+    void setTruncateEnabled(final boolean truncateEnabled);
+
+    void setIncrementalMigrationTimestamp(final Instant timeStampInstant);
+
+    Set<String> setIncrementalTables(final Set<String> incrementalTables);
+
+    void setIncrementalModeEnabled(final boolean incrementalModeEnabled);
+
+    void setIncludedTables(final Set<String> includedTables);
+
+    void setDeletionEnabled(boolean deletionEnabled);
+
+    void setLpTableMigrationEnabled(boolean lpTableMigrationEnabled);
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/MigrationContext.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/MigrationContext.java
new file mode 100644
index 0000000..8381264
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/MigrationContext.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context;
+
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+
+import java.time.Instant;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * The MigrationContext contains all information needed to perform a Source -> Target Migration
+ */
+public interface MigrationContext {
+    DataRepository getDataSourceRepository();
+
+    DataRepository getDataTargetRepository();
+
+    boolean isMigrationTriggeredByUpdateProcess();
+
+    boolean isSchemaMigrationEnabled();
+
+    boolean isAddMissingTablesToSchemaEnabled();
+
+    boolean isRemoveMissingTablesToSchemaEnabled();
+
+    boolean isAddMissingColumnsToSchemaEnabled();
+
+    boolean isRemoveMissingColumnsToSchemaEnabled();
+
+    boolean isSchemaMigrationAutoTriggerEnabled();
+
+    int getReaderBatchSize();
+
+    boolean isTruncateEnabled();
+
+    boolean isAuditTableMigrationEnabled();
+
+    Set<String> getTruncateExcludedTables();
+
+    int getMaxParallelReaderWorkers();
+
+    int getMaxParallelWriterWorkers();
+
+    int getMaxParallelTableCopy();
+
+    int getMaxWorkerRetryAttempts();
+
+    boolean isFailOnErrorEnabled();
+
+    Map<String, Set<String>> getExcludedColumns();
+
+    Map<String, Set<String>> getNullifyColumns();
+
+    Set<String> getCustomTables();
+
+    Set<String> getExcludedTables();
+
+    Set<String> getIncludedTables();
+
+    boolean isDropAllIndexesEnabled();
+
+    boolean isDisableAllIndexesEnabled();
+
+    Set<String> getDisableAllIndexesIncludedTables();
+
+    boolean isClusterMode();
+
+    boolean isIncrementalModeEnabled();
+
+    Set<String> getIncrementalTables();
+
+    Instant getIncrementalTimestamp();
+
+    boolean isBulkCopyEnabled();
+
+    int getDataPipeTimeout();
+
+    int getDataPipeCapacity();
+
+    int getStalledTimeout();
+
+    String getMigrationReportConnectionString();
+
+    int getMaxTargetStagedMigrations();
+
+    boolean isDeletionEnabled();
+
+    boolean isLpTableMigrationEnabled();
+
+    void refreshSelf();
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultIncrementalMigrationContext.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultIncrementalMigrationContext.java
new file mode 100644
index 0000000..748a444
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultIncrementalMigrationContext.java
@@ -0,0 +1,146 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context.impl;
+
+import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.configuration.Configuration;
+import org.apache.commons.lang.StringUtils;
+import org.apache.log4j.Logger;
+import com.sap.cx.boosters.commercedbsync.context.IncrementalMigrationContext;
+import com.sap.cx.boosters.commercedbsync.repository.impl.DataRepositoryFactory;
+
+import java.time.Instant;
+import java.time.ZonedDateTime;
+import java.time.format.DateTimeFormatter;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+public class DefaultIncrementalMigrationContext extends DefaultMigrationContext implements IncrementalMigrationContext {
+
+    private static final Logger LOG = Logger.getLogger(DefaultIncrementalMigrationContext.class.getName());
+    private Instant timestampInstant;
+    private Set<String> incrementalTables;
+    private Set<String> includedTables;
+
+    public DefaultIncrementalMigrationContext(DataSourceConfiguration sourceDataSourceConfiguration, DataSourceConfiguration targetDataSourceConfiguration, DataRepositoryFactory dataRepositoryFactory, Configuration configuration) throws Exception {
+        super(sourceDataSourceConfiguration, targetDataSourceConfiguration, dataRepositoryFactory, configuration);
+    }
+
+    @Override
+    public Instant getIncrementalMigrationTimestamp() {
+        return timestampInstant;
+    }
+
+    @Override
+    public void setSchemaMigrationAutoTriggerEnabled(boolean autoTriggerEnabled) {
+        configuration.setProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_AUTOTRIGGER_ENABLED,
+                String.valueOf(autoTriggerEnabled));
+    }
+
+    @Override
+    public void setTruncateEnabled(boolean truncateEnabled) {
+        configuration.setProperty(CommercedbsyncConstants.MIGRATION_DATA_TRUNCATE_ENABLED,
+                String.valueOf(truncateEnabled));
+    }
+
+    @Override
+    public void setIncrementalMigrationTimestamp(Instant timeStampInstant) {
+        this.timestampInstant = timeStampInstant;
+    }
+
+    @Override
+    public Set<String> setIncrementalTables(Set<String> incrementalTables) {
+        return this.incrementalTables = incrementalTables;
+    }
+
+    @Override
+    public Set<String> getIncrementalTables() {
+        return CollectionUtils.isNotEmpty(this.incrementalTables) ?
+                this.incrementalTables : getListProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TABLES);
+    }
+
+    @Override
+    public void setIncrementalModeEnabled(boolean incrementalModeEnabled) {
+        configuration.setProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_ENABLED,
+                Boolean.toString(incrementalModeEnabled));
+    }
+
+    @Override
+    public Instant getIncrementalTimestamp() {
+        if (null != getIncrementalMigrationTimestamp()) {
+            if (LOG.isDebugEnabled()) {
+                LOG.debug("getIncrementalTimestamp(): " + timestampInstant);
+            }
+            return getIncrementalMigrationTimestamp();
+        }
+        String timeStamp = getStringProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TIMESTAMP);
+        if (StringUtils.isEmpty(timeStamp)) {
+            return null;
+        }
+        return ZonedDateTime.parse(timeStamp, DateTimeFormatter.ISO_ZONED_DATE_TIME).toInstant();
+    }
+
+    @Override
+    public Set<String> getIncludedTables() {
+        if (isIncrementalModeEnabled()) {
+            return Collections.emptySet();
+        }
+        return CollectionUtils.isNotEmpty(includedTables) ? includedTables :
+                getListProperty(CommercedbsyncConstants.MIGRATION_DATA_TABLES_INCLUDED);
+    }
+
+    @Override
+    public void setIncludedTables(Set<String> includedTables) {
+        this.includedTables = includedTables;
+    }
+
+    @Override
+    public void setDeletionEnabled(boolean deletionEnabled) {
+        this.deletionEnabled = deletionEnabled;
+    }
+
+    @Override
+    public void setLpTableMigrationEnabled(boolean lpTableMigrationEnabled) {
+        this.lpTableMigrationEnabled = lpTableMigrationEnabled;
+    }
+
+    private Set<String> getListProperty(final String key) {
+        final String tables = super.configuration.getString(key);
+
+        if (StringUtils.isEmpty(tables)) {
+            return Collections.emptySet();
+        }
+
+        final Set<String> result = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
+        final String[] tablesArray = tables.split(",");
+        result.addAll(Arrays.stream(tablesArray).collect(Collectors.toSet()));
+
+        return result;
+    }
+
+    // ORACLE_TARGET -- START
+    /*
+     * Fire this method only from the HAC controller, not from the jobs.
+     */
+    @Override
+    public void refreshSelf() {
+        LOG.info("Refreshing Context");
+        // lists
+        this.setIncludedTables(Collections.emptySet());
+        this.setIncrementalTables(Collections.emptySet());
+        this.setIncrementalMigrationTimestamp(null);
+    }
+    // ORACLE_TARGET -- END
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultMigrationContext.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultMigrationContext.java
new file mode 100644
index 0000000..6c46062
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/impl/DefaultMigrationContext.java
@@ -0,0 +1,306 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context.impl;
+
+import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import org.apache.commons.configuration.Configuration;
+import org.apache.commons.lang.StringUtils;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+import com.sap.cx.boosters.commercedbsync.repository.impl.DataRepositoryFactory;
+
+import java.time.Instant;
+import java.time.ZonedDateTime;
+import java.time.format.DateTimeFormatter;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+public class DefaultMigrationContext implements MigrationContext {
+    private final DataRepository dataSourceRepository;
+    private final DataRepository dataTargetRepository;
+    protected boolean deletionEnabled;
+    protected boolean lpTableMigrationEnabled;
+
+    protected final Configuration configuration;
+
+    public DefaultMigrationContext(final DataSourceConfiguration sourceDataSourceConfiguration,
+                                   final DataSourceConfiguration targetDataSourceConfiguration,
+                                   final DataRepositoryFactory dataRepositoryFactory,
+                                   final Configuration configuration) throws Exception {
+        this.dataSourceRepository = dataRepositoryFactory.create(sourceDataSourceConfiguration);
+        this.dataTargetRepository = dataRepositoryFactory.create(targetDataSourceConfiguration);
+        this.configuration = configuration;
+        ensureDefaultLocale(configuration);
+    }
+
+    private void ensureDefaultLocale(Configuration configuration) {
+        String localeProperty = configuration.getString(CommercedbsyncConstants.MIGRATION_LOCALE_DEFAULT);
+        Locale locale = Locale.forLanguageTag(localeProperty);
+        Locale.setDefault(locale);
+    }
+
+    @Override
+    public DataRepository getDataSourceRepository() {
+        return dataSourceRepository;
+    }
+
+    @Override
+    public DataRepository getDataTargetRepository() {
+        return dataTargetRepository;
+    }
+
+    @Override
+    public boolean isMigrationTriggeredByUpdateProcess() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_TRIGGER_UPDATESYSTEM);
+    }
+
+    @Override
+    public boolean isSchemaMigrationEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_ENABLED);
+    }
+
+    @Override
+    public boolean isAddMissingTablesToSchemaEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_TABLES_ADD_ENABLED);
+    }
+
+    @Override
+    public boolean isRemoveMissingTablesToSchemaEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_TABLES_REMOVE_ENABLED);
+    }
+
+    @Override
+    public boolean isAddMissingColumnsToSchemaEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_COLUMNS_ADD_ENABLED);
+    }
+
+    @Override
+    public boolean isRemoveMissingColumnsToSchemaEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_COLUMNS_REMOVE_ENABLED);
+    }
+
+    @Override
+    public boolean isSchemaMigrationAutoTriggerEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_SCHEMA_AUTOTRIGGER_ENABLED);
+    }
+
+    @Override
+    public int getReaderBatchSize() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_READER_BATCHSIZE);
+    }
+
+    @Override
+    public boolean isTruncateEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_TRUNCATE_ENABLED);
+    }
+
+    @Override
+    public boolean isAuditTableMigrationEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_TABLES_AUDIT_ENABLED);
+    }
+
+    @Override
+    public Set<String> getTruncateExcludedTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_TRUNCATE_EXCLUDED);
+    }
+
+    @Override
+    public int getMaxParallelReaderWorkers() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_WORKERS_READER_MAXTASKS);
+    }
+
+    @Override
+    public int getMaxParallelWriterWorkers() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_WORKERS_WRITER_MAXTASKS);
+    }
+
+    @Override
+    public int getMaxWorkerRetryAttempts() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_WORKERS_RETRYATTEMPTS);
+    }
+
+    @Override
+    public int getMaxParallelTableCopy() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_MAXPRALLELTABLECOPY);
+    }
+
+    @Override
+    public boolean isFailOnErrorEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_FAILONEERROR_ENABLED);
+    }
+
+    @Override
+    public Map<String, Set<String>> getExcludedColumns() {
+        return getDynamicPropertyKeys(CommercedbsyncConstants.MIGRATION_DATA_COLUMNS_EXCLUDED);
+    }
+
+    public Map<String, Set<String>> getNullifyColumns() {
+        return getDynamicPropertyKeys(CommercedbsyncConstants.MIGRATION_DATA_COLUMNS_NULLIFY);
+    }
+
+    @Override
+    public Set<String> getCustomTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_TABLES_CUSTOM);
+    }
+
+    @Override
+    public Set<String> getExcludedTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_TABLES_EXCLUDED);
+    }
+
+    @Override
+    public Set<String> getIncludedTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_TABLES_INCLUDED);
+    }
+
+    @Override
+    public boolean isDropAllIndexesEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_INDICES_DROP_ENABLED);
+    }
+
+    @Override
+    public boolean isDisableAllIndexesEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_INDICES_DISABLE_ENABLED);
+    }
+
+    @Override
+    public Set<String> getDisableAllIndexesIncludedTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_INDICES_DISABLE_INCLUDED);
+    }
+
+    @Override
+    public boolean isClusterMode() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_CLUSTER_ENABLED);
+    }
+
+    @Override
+    public boolean isIncrementalModeEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_ENABLED);
+    }
+
+    @Override
+    public Set<String> getIncrementalTables() {
+        return getListProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TABLES);
+    }
+
+    @Override
+    public Instant getIncrementalTimestamp() {
+        String timeStamp = getStringProperty(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TIMESTAMP);
+        if (StringUtils.isEmpty(timeStamp)) {
+            return null;
+        }
+        return ZonedDateTime.parse(timeStamp, DateTimeFormatter.ISO_ZONED_DATE_TIME).toInstant();
+    }
+
+    @Override
+    public boolean isBulkCopyEnabled() {
+        return getBooleanProperty(CommercedbsyncConstants.MIGRATION_DATA_BULKCOPY_ENABLED);
+    }
+
+    @Override
+    public int getDataPipeTimeout() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_PIPE_TIMEOUT);
+    }
+
+    @Override
+    public int getDataPipeCapacity() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_DATA_PIPE_CAPACITY);
+    }
+
+    @Override
+    public String getMigrationReportConnectionString() {
+        return getStringProperty(CommercedbsyncConstants.MIGRATION_DATA_REPORT_CONNECTIONSTRING);
+    }
+
+    @Override
+    public int getMaxTargetStagedMigrations() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_TARGET_MAX_STAGE_MIGRATIONS);
+    }
+
+    @Override
+    public boolean isDeletionEnabled() {
+        return this.deletionEnabled;
+    }
+
+    @Override
+    public boolean isLpTableMigrationEnabled() {
+        return this.lpTableMigrationEnabled;
+    }
+
+    @Override
+    public void refreshSelf() {
+
+    }
+
+    @Override
+    public int getStalledTimeout() {
+        return getNumericProperty(CommercedbsyncConstants.MIGRATION_STALLED_TIMEOUT);
+    }
+
+    protected boolean getBooleanProperty(final String key) {
+        return configuration.getBoolean(key);
+    }
+
+    protected int getNumericProperty(final String key) {
+        return configuration.getInt(key);
+    }
+
+    protected String getStringProperty(final String key) {
+        return configuration.getString(key);
+    }
+
+    private Set<String> getListProperty(final String key) {
+        final String tables = configuration.getString(key);
+
+        if (StringUtils.isEmpty(tables)) {
+            return Collections.emptySet();
+        }
+
+        final Set<String> result = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
+        final String[] tablesArray = tables.split(",");
+        result.addAll(Arrays.stream(tablesArray).collect(Collectors.toSet()));
+
+        return result;
+    }
+
+    private Map<String, Set<String>> getDynamicPropertyKeys(final String key) {
+        final Map<String, Set<String>> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
+        final Configuration subset = configuration.subset(key);
+        final Iterator<String> keys = subset.getKeys();
+        while (keys.hasNext()) {
+            final String current = keys.next();
+            map.put(current, getListProperty(key + "." + current));
+        }
+        return map;
+    }
+
+    private Map<String, String[]> getDynamicPropertyKeysValue(final String key) {
+        final Map<String, String[]> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
+        final Configuration subset = configuration.subset(key);
+        final Iterator<String> keys = subset.getKeys();
+
+        while (keys.hasNext()) {
+            final String current = keys.next();
+            final String params = configuration.getString(key + "." + current);
+            final String[] paramsArray = params.split(",");
+            map.put(current, paramsArray);
+        }
+        return map;
+    }
+}
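To make the property-driven accessors concrete, here is how a few hypothetical settings (invented table and column names) would surface through the API above:

    // migration.data.tables.included=products,orders
    //     -> getIncludedTables() yields {"products", "orders"} (a case-insensitive set)
    // migration.data.columns.excluded.orders=createdTS,modifiedTS
    //     -> getExcludedColumns().get("orders") yields {"createdTS", "modifiedTS"}
    // migration.data.incremental.timestamp=2022-09-01T00:00:00Z
    //     -> getIncrementalTimestamp() parses the value with DateTimeFormatter.ISO_ZONED_DATE_TIME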
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/MigrationContextValidator.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/MigrationContextValidator.java
new file mode 100644
index 0000000..474e64c
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/MigrationContextValidator.java
@@ -0,0 +1,15 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context.validation;
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+public interface MigrationContextValidator {
+
+    void validateContext(MigrationContext context);
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/impl/DefaultMigrationContextValidator.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/impl/DefaultMigrationContextValidator.java
new file mode 100644
index 0000000..d52c71d
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/context/validation/impl/DefaultMigrationContextValidator.java
@@ -0,0 +1,52 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.context.validation.impl;
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.context.validation.MigrationContextValidator;
+import de.hybris.platform.servicelayer.config.ConfigurationService;
+import org.apache.commons.lang.StringUtils;
+
+import java.util.Locale;
+
+public class DefaultMigrationContextValidator implements MigrationContextValidator {
+
+    private static final String DB_URL_PROPERTY_KEY = "migration.ds.target.db.url";
+    private static final String DISABLE_UNLOCKING = "system.unlocking.disabled";
+    private ConfigurationService configurationService;
+
+    @Override
+    public void validateContext(final MigrationContext context) {
+        // Canonically the target should always be the CCV2 DB and we have to verify nobody is trying to copy *from* that
+        final String sourceDbUrl = context.getDataSourceRepository().getDataSourceConfiguration().getConnectionString();
+        final String ccv2ManagedDB = getConfigurationService().getConfiguration().getString(DB_URL_PROPERTY_KEY);
+        final boolean isSystemLocked = getConfigurationService().getConfiguration().getBoolean(DISABLE_UNLOCKING);
+
+        if (sourceDbUrl.equals(ccv2ManagedDB)) {
+            throw new RuntimeException("Invalid data source configuration - cannot use the CCV2-managed database as the source.");
+        }
+
+        if (isSystemLocked) {
+            throw new RuntimeException("You cannot run the migration on a locked system. Check property " + DISABLE_UNLOCKING);
+        }
+
+        // we rely on the default locale for locale-related comparisons
+        Locale defaultLocale = Locale.getDefault();
+        if (defaultLocale == null || StringUtils.isEmpty(defaultLocale.toString())) {
+            throw new RuntimeException("There is no default locale specified on the running server. Set the default locale and try again.");
+        }
+    }
+
+    public ConfigurationService getConfigurationService() {
+        return configurationService;
+    }
+
+    public void setConfigurationService(ConfigurationService configurationService) {
+        this.configurationService = configurationService;
+    }
+
+}
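A caller is expected to run this check before any copy work is scheduled; a minimal sketch, assuming the wiring normally done by Spring (configurationService and migrationContext assumed available):

    MigrationContextValidator validator = new DefaultMigrationContextValidator();
    validator.setConfigurationService(configurationService);
    validator.validateContext(migrationContext); // throws RuntimeException on an invalid setup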
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/FullMigrationCronJob.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/FullMigrationCronJob.java
new file mode 100644
index 0000000..a525ff2
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/FullMigrationCronJob.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.cron;
+
+import de.hybris.platform.jalo.Item;
+import de.hybris.platform.jalo.JaloBusinessException;
+import de.hybris.platform.jalo.SessionContext;
+import de.hybris.platform.jalo.type.ComposedType;
+import org.apache.log4j.Logger;
+import com.sap.cx.boosters.commercedbsync.cron.GeneratedFullMigrationCronJob;
+
+public class FullMigrationCronJob extends GeneratedFullMigrationCronJob
+{
+    @SuppressWarnings("unused")
+    private static final Logger LOG = Logger.getLogger(FullMigrationCronJob.class.getName());
+
+    @Override
+    protected Item createItem(final SessionContext ctx, final ComposedType type, final Item.ItemAttributeMap allAttributes) throws JaloBusinessException
+    {
+        // business code placed here will be executed before the item is created
+        // then create the item
+        final Item item = super.createItem(ctx, type, allAttributes);
+        // business code placed here will be executed after the item was created
+        // and return the item
+        return item;
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/IncrementalMigrationCronJob.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/IncrementalMigrationCronJob.java
new file mode 100644
index 0000000..96e3094
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/IncrementalMigrationCronJob.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.cron;
+
+import de.hybris.platform.jalo.Item;
+import de.hybris.platform.jalo.JaloBusinessException;
+import de.hybris.platform.jalo.SessionContext;
+import de.hybris.platform.jalo.type.ComposedType;
+import org.apache.log4j.Logger;
+import com.sap.cx.boosters.commercedbsync.cron.GeneratedIncrementalMigrationCronJob;
+
+public class IncrementalMigrationCronJob extends GeneratedIncrementalMigrationCronJob
+{
+    @SuppressWarnings("unused")
+    private static final Logger LOG = Logger.getLogger(IncrementalMigrationCronJob.class.getName());
+
+    @Override
+    protected Item createItem(final SessionContext ctx, final ComposedType type, final Item.ItemAttributeMap allAttributes) throws JaloBusinessException
+    {
+        // business code placed here will be executed before the item is created
+        // then create the item
+        final Item item = super.createItem(ctx, type, allAttributes);
+        // business code placed here will be executed after the item was created
+        // and return the item
+        return item;
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/MigrationCronJob.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/MigrationCronJob.java
new file mode 100644
index 0000000..35e7814
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/cron/MigrationCronJob.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.cron;
+
+import de.hybris.platform.jalo.Item;
+import de.hybris.platform.jalo.JaloBusinessException;
+import de.hybris.platform.jalo.SessionContext;
+import de.hybris.platform.jalo.type.ComposedType;
+import org.apache.log4j.Logger;
+import com.sap.cx.boosters.commercedbsync.cron.GeneratedMigrationCronJob;
+
+public class MigrationCronJob extends GeneratedMigrationCronJob
+{
+    @SuppressWarnings("unused")
+    private static final Logger LOG = Logger.getLogger(MigrationCronJob.class.getName());
+
+    @Override
+    protected Item createItem(final SessionContext ctx, final ComposedType type, final Item.ItemAttributeMap allAttributes) throws JaloBusinessException
+    {
+        // business code placed here will be executed before the item is created
+        // then create the item
+        final Item item = super.createItem(ctx, type, allAttributes);
+        // business code placed here will be executed after the item was created
+        // and return the item
+        return item;
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataColumn.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataColumn.java
new file mode 100644
index 0000000..9c7a165
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataColumn.java
@@ -0,0 +1,19 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.dataset;
+
+public interface DataColumn {
+
+    String getColumnName();
+
+    int getColumnType();
+
+    int getPrecision();
+
+    int getScale();
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataSet.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataSet.java
new file mode 100644
index 0000000..829be66
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/DataSet.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.dataset;
+
+import com.microsoft.sqlserver.jdbc.ISQLServerBulkData;
+import com.sap.cx.boosters.commercedbsync.dataset.impl.DefaultDataSet;
+
+import java.util.Collections;
+import java.util.List;
+
+public interface DataSet {
+
+    DataSet EMPTY = new DefaultDataSet(0, Collections.emptyList(), Collections.emptyList());
+
+    int getColumnCount();
+
+    List<List<Object>> getAllResults();
+
+    Object getColumnValue(String column, List<Object> row);
+
+    Object getColumnValueForPostGres(String columnName, List<Object> row, DataColumn sourceColumnType, int targetColumnType);
+
+    boolean isNotEmpty();
+
+    boolean hasColumn(String column);
+
+    ISQLServerBulkData toSQLServerBulkData();
+}
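A consumer-side sketch for the DataSet API (dataSet is assumed to come from a reader task; the column name "PK" is invented for illustration):

    for (List<Object> row : dataSet.getAllResults()) {
        Object pk = dataSet.getColumnValue("PK", row); // throws IllegalArgumentException for unknown columns
        // ... hand the value to a writer
    }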
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/BulkDataSet.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/BulkDataSet.java
new file mode 100644
index 0000000..d91d1f8
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/BulkDataSet.java
@@ -0,0 +1,77 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.dataset.impl;
+
+import com.microsoft.sqlserver.jdbc.ISQLServerBulkData;
+import org.apache.logging.log4j.util.Strings;
+import com.sap.cx.boosters.commercedbsync.dataset.DataColumn;
+
+import java.sql.SQLException;
+import java.sql.Types;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+public class BulkDataSet extends DefaultDataSet implements ISQLServerBulkData {
+
+    private final Map<Integer, DataColumn> typeMap = new HashMap<>();
+    private int pointer = -1;
+    private Set<Integer> columnOrdinals;
+
+    public BulkDataSet(int columnCount, List<DataColumn> columnOrder, List<List<Object>> result) {
+        super(columnCount, columnOrder, result);
+        this.columnOrdinals = IntStream.range(1, columnOrder.size() + 1).boxed().collect(Collectors.toSet());
+        this.typeMap.put(Types.BLOB, new DefaultDataColumn(Strings.EMPTY, Types.LONGVARBINARY, 0x7FFFFFFF, 0));
+    }
+
+    @Override
+    public Set<Integer> getColumnOrdinals() {
+        return columnOrdinals;
+    }
+
+    @Override
+    public String getColumnName(int i) {
+        return getColumnOrder().get(i - 1).getColumnName();
+    }
+
+    @Override
+    public int getColumnType(int i) {
+        return mapColumn(getColumnOrder().get(i - 1)).getColumnType();
+    }
+
+    @Override
+    public int getPrecision(int i) {
+        return mapColumn(getColumnOrder().get(i - 1)).getPrecision();
+    }
+
+    @Override
+    public int getScale(int i) {
+        return mapColumn(getColumnOrder().get(i - 1)).getScale();
+    }
+
+    @Override
+    public Object[] getRowData() throws SQLException {
+        return getAllResults().get(pointer).toArray();
+    }
+
+    @Override
+    public boolean next() throws SQLException {
+        pointer++;
+        return getAllResults().size() > pointer;
+    }
+
+    private DataColumn mapColumn(DataColumn column) {
+        if (typeMap.containsKey(column.getColumnType())) {
+            return typeMap.get(column.getColumnType());
+        }
+        return column;
+    }
+
+}
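toSQLServerBulkData() exists so a whole batch can be streamed through the MSSQL driver's bulk-copy API; a sketch under the assumption that a target Connection and a filled DataSet are at hand (the table name is invented):

    try (Connection con = targetDataSource.getConnection();
         SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(con)) {
        bulkCopy.setDestinationTableName("products");
        bulkCopy.writeToServer(dataSet.toSQLServerBulkData());
    }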
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataColumn.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataColumn.java
new file mode 100644
index 0000000..0b05bc0
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataColumn.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.dataset.impl;
+
+import com.sap.cx.boosters.commercedbsync.dataset.DataColumn;
+
+public class DefaultDataColumn implements DataColumn {
+
+    private final String name;
+    private final int type;
+    private final int precision;
+    private final int scale;
+
+    public DefaultDataColumn(String name, int type, int precision, int scale) {
+        this.name = name;
+        this.type = type;
+        this.precision = precision;
+        this.scale = scale;
+    }
+
+    @Override
+    public String getColumnName() {
+        return name;
+    }
+
+    @Override
+    public int getColumnType() {
+        return type;
+    }
+
+    @Override
+    public int getPrecision() {
+        return precision;
+    }
+
+    @Override
+    public int getScale() {
+        return scale;
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataSet.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataSet.java
new file mode 100644
index 0000000..1d044f5
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/dataset/impl/DefaultDataSet.java
@@ -0,0 +1,131 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.dataset.impl;
+
+import com.github.freva.asciitable.AsciiTable;
+import com.microsoft.sqlserver.jdbc.ISQLServerBulkData;
+import org.apache.commons.lang3.ObjectUtils;
+import org.apache.commons.lang3.StringUtils;
+import com.sap.cx.boosters.commercedbsync.dataset.DataColumn;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+
+import javax.annotation.concurrent.Immutable;
+import java.sql.Types;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+@Immutable
+public class DefaultDataSet implements DataSet {
+
+    private final int columnCount;
+    private final List<DataColumn> columnOrder;
+    private final List<List<Object>> result;
+
+    public DefaultDataSet(int columnCount, List<DataColumn> columnOrder, List<List<Object>> result) {
+        this.columnCount = columnCount;
+        // TODO REVIEW Downgraded from Java8 to Java11
+        this.columnOrder = Collections.unmodifiableList(columnOrder);
+        this.result = Collections.unmodifiableList(result.stream().map(Collections::unmodifiableList).collect(Collectors.toList()));
+    }
+
+    @Override
+    public int getColumnCount() {
+        return columnCount;
+    }
+
+    @Override
+    public List<List<Object>> getAllResults() {
+        return result;
+    }
+
+    @Override
+    public Object getColumnValue(String columnName, List<Object> row) {
+        if (columnName == null || !hasColumn(columnName)) {
+            throw new IllegalArgumentException(String.format("Column %s is not part of the result", columnName));
+        }
+        int idx = IntStream.range(0, columnOrder.size()).filter(i -> columnName.equalsIgnoreCase(columnOrder.get(i).getColumnName())).findFirst().getAsInt();
+        return row.get(idx);
+    }
+
+    @Override
+    public Object getColumnValueForPostGres(String columnName, List<Object> row, DataColumn sourceColumnType, int targetColumnType) {
+        if (columnName == null || !hasColumn(columnName)) {
+            throw new IllegalArgumentException(String.format("Column %s is not part of the result", columnName));
+        }
+        int idx = IntStream.range(0, columnOrder.size()).filter(i -> columnName.equalsIgnoreCase(columnOrder.get(i).getColumnName())).findFirst().getAsInt();
+        Object columnValue = row.get(idx);
+        if (ObjectUtils.isNotEmpty(columnValue)) {
+            switch (sourceColumnType.getColumnType()) {
+                case Types.CHAR:
+                    // a single-character CHAR source written to a SMALLINT target:
+                    // transfer the character's numeric value instead of the string
+                    if (sourceColumnType.getPrecision() == 4 && targetColumnType == Types.SMALLINT) {
+                        if (columnValue instanceof String && ((String) columnValue).trim().length() == 1) {
+                            columnValue = (int) (((String) columnValue).trim().charAt(0));
+                        }
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+        return columnValue;
+    }
+
+    public Object getColumnValueForHANA(String columnName, List<Object> row, DataColumn sourceColumnType, int targetColumnType) {
+        if (columnName == null || !hasColumn(columnName)) {
+            throw new IllegalArgumentException(String.format("Column %s is not part of the result", columnName));
+        }
+        int idx = IntStream.range(0, columnOrder.size()).filter(i -> columnName.equalsIgnoreCase(columnOrder.get(i).getColumnName())).findFirst().getAsInt();
+        Object columnValue = row.get(idx);
+        if (ObjectUtils.isNotEmpty(columnValue)) {
+            switch (sourceColumnType.getColumnType()) {
+                case Types.CHAR:
+                    // same single-character CHAR to SMALLINT conversion as for Postgres
+                    if (sourceColumnType.getPrecision() == 4 && targetColumnType == Types.SMALLINT) {
+                        if (columnValue instanceof String && ((String) columnValue).trim().length() == 1) {
+                            columnValue = (int) (((String) columnValue).trim().charAt(0));
+                        }
+                    }
+                    break;
+                default:
+                    break;
+            }
+        }
+        return columnValue;
+    }
+
+    @Override
+    public boolean isNotEmpty() {
+        return getAllResults() != null && getAllResults().size() > 0;
+    }
+
+    @Override
+    public boolean hasColumn(String column) {
+        if (StringUtils.isEmpty(column)) {
+            return false;
+        }
+        return columnOrder.stream().map(DataColumn::getColumnName).anyMatch(column::equalsIgnoreCase);
+    }
+
+    public String toString() {
+        String[] headers = columnOrder.stream().map(DataColumn::getColumnName).toArray(String[]::new);
+        String[][] data = getAllResults().stream()
+                .map(l -> l.stream().map(String::valueOf).toArray(String[]::new))
+                .toArray(String[][]::new);
+        return AsciiTable.getTable(headers, data);
+    }
+
+    public List<DataColumn> getColumnOrder() {
+        return columnOrder;
+    }
+
+    @Override
+    public ISQLServerBulkData toSQLServerBulkData() {
+        return new BulkDataSet(columnCount, columnOrder, result);
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/MigrationDataSourceFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/MigrationDataSourceFactory.java
new file mode 100644
index 0000000..62fb732
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/MigrationDataSourceFactory.java
@@ -0,0 +1,18 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.datasource;
+
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+
+import javax.sql.DataSource;
+
+/**
+ * Factory to create the DataSources used for Migration
+ */
+public interface MigrationDataSourceFactory {
+    DataSource create(DataSourceConfiguration dataSourceConfiguration);
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/AbstractMigrationDataSourceFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/AbstractMigrationDataSourceFactory.java
new file mode 100644
index 0000000..6d8da5f
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/AbstractMigrationDataSourceFactory.java
@@ -0,0 +1,16 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.datasource.impl;
+
+import com.sap.cx.boosters.commercedbsync.datasource.MigrationDataSourceFactory;
+
+/**
+ * Base class for {@link MigrationDataSourceFactory} implementations.
+ */
+public abstract class AbstractMigrationDataSourceFactory implements MigrationDataSourceFactory {
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/DefaultMigrationDataSourceFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/DefaultMigrationDataSourceFactory.java
new file mode 100644
index 0000000..1e7f4a7
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/datasource/impl/DefaultMigrationDataSourceFactory.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.datasource.impl;
+
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.zaxxer.hikari.HikariConfig;
+import com.zaxxer.hikari.HikariDataSource;
+
+import javax.sql.DataSource;
+
+public class DefaultMigrationDataSourceFactory extends AbstractMigrationDataSourceFactory {
+
+    // TODO: resource leak: DataSources are never closed
+    @Override
+    public DataSource create(DataSourceConfiguration dataSourceConfiguration) {
+        HikariConfig config = new HikariConfig();
+        config.setJdbcUrl(dataSourceConfiguration.getConnectionString());
+        config.setDriverClassName(dataSourceConfiguration.getDriver());
+        config.setUsername(dataSourceConfiguration.getUserName());
+        config.setPassword(dataSourceConfiguration.getPassword());
+        config.setMaximumPoolSize(dataSourceConfiguration.getMaxActive());
+        config.setMinimumIdle(dataSourceConfiguration.getMinIdle());
+        config.setRegisterMbeans(true);
+        return new HikariDataSource(config);
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyCompleteEvent.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyCompleteEvent.java
new file mode 100644
index 0000000..fc26fe8
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyCompleteEvent.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.events;
+
+/**
+ * ClusterAwareEvent to signal completion of the assigned copy tasks
+ */
+public class CopyCompleteEvent extends CopyEvent {
+
+    private Boolean copyResult = false;
+
+    public CopyCompleteEvent(final Integer sourceNodeId, final String migrationId) {
+        super(sourceNodeId, migrationId);
+    }
+
+    public Boolean getCopyResult() {
+        return copyResult;
+    }
+}
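A node would typically signal completion through the platform event service; a minimal sketch (eventService and clusterService assumed injected, copyContext assumed at hand):

    eventService.publishEvent(new CopyCompleteEvent(clusterService.getClusterId(), copyContext.getMigrationId()));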
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyDatabaseTableEvent.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyDatabaseTableEvent.java
new file mode 100644
index 0000000..b336fb4
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyDatabaseTableEvent.java
@@ -0,0 +1,15 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.events;
+
+/**
+ * Cluster event to notify the cluster nodes to start the copy process
+ */
+public class CopyDatabaseTableEvent extends CopyEvent {
+    public CopyDatabaseTableEvent(final Integer sourceNodeId, final String migrationId) {
+        super(sourceNodeId, migrationId);
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyEvent.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyEvent.java
new file mode 100644
index 0000000..7408e16
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/CopyEvent.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.events;
+
+import de.hybris.platform.servicelayer.event.ClusterAwareEvent;
+import de.hybris.platform.servicelayer.event.PublishEventContext;
+import de.hybris.platform.servicelayer.event.events.AbstractEvent;
+
+/**
+ * ClusterAwareEvent to notify other Nodes to start the migration
+ */
+public abstract class CopyEvent extends AbstractEvent implements ClusterAwareEvent {
+
+    private final int sourceNodeId;
+
+    private final String migrationId;
+
+    public CopyEvent(final int sourceNodeId, final String migrationId) {
+        super();
+        this.sourceNodeId = sourceNodeId;
+        this.migrationId = migrationId;
+    }
+
+    @Override
+    public boolean canPublish(PublishEventContext publishEventContext) {
+        return true;
+    }
+
+    /**
+     * @return the sourceNodeId
+     */
+    public int getSourceNodeId() {
+        return sourceNodeId;
+    }
+
+    /**
+     * @return the migrationId
+     */
+    public String getMigrationId() {
+        return migrationId;
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyCompleteEventListener.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyCompleteEventListener.java
new file mode 100644
index 0000000..766b064
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyCompleteEventListener.java
@@ -0,0 +1,112 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.events.handlers;
+
+import com.sap.cx.boosters.commercedbsync.events.CopyCompleteEvent;
+import de.hybris.platform.servicelayer.event.impl.AbstractEventListener;
+import de.hybris.platform.tx.Transaction;
+import de.hybris.platform.tx.TransactionBody;
+import com.sap.cx.boosters.commercedbsync.MigrationProgress;
+import com.sap.cx.boosters.commercedbsync.MigrationStatus;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler;
+import com.sap.cx.boosters.commercedbsync.processors.MigrationPostProcessor;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+
+/**
+ * Receives an Event when a node has completed Copying Data Tasks
+ */
+public class CopyCompleteEventListener extends AbstractEventListener<CopyCompleteEvent> {
+    private static final Logger LOG = LoggerFactory.getLogger(CopyCompleteEventListener.class.getName());
+
+    private MigrationContext migrationContext;
+
+    private DatabaseCopyTaskRepository databaseCopyTaskRepository;
+
+    private PerformanceProfiler performanceProfiler;
+
+    private ArrayList<MigrationPostProcessor> postProcessors;
+
+    @Override
+    protected void onEvent(final CopyCompleteEvent event) {
+        final String migrationId = event.getMigrationId();
+
+        LOG.info("Migration finished on Node {} with result {}", event.getSourceNodeId(), event.getCopyResult());
+        final CopyContext copyContext = new CopyContext(migrationId, migrationContext, new HashSet<>(),
+                performanceProfiler);
+
+        executePostProcessors(copyContext);
+    }
+
+    /**
+     * Runs through all the Post Processors in a transaction to avoid multiple executions
+     *
+     * @param copyContext
+     */
+    private void executePostProcessors(final CopyContext copyContext) {
+        try {
+            Transaction.current().execute(new TransactionBody() {
+                @Override
+                public Object execute() throws Exception {
+
+                    final MigrationStatus status = databaseCopyTaskRepository.getMigrationStatus(copyContext);
+
+                    // ORACLE_TARGET -- START
+                    if (status.isFailed()) {
+                        // return null;
+                        LOG.error("Status FAILED");
+                    }
+                    // ORACLE_TARGET -- END
+
+                    LOG.debug("Starting PostProcessor execution");
+
+                    // ORACLE_TARGET -- START
+                    if ((status.getStatus() == MigrationProgress.PROCESSED)
+                            || (status.getStatus() == MigrationProgress.ABORTED)) {
+                        postProcessors.forEach(p -> p.process(copyContext));
+                    }
+                    // ORACLE_TARGET -- END
+                    LOG.debug("Finishing PostProcessor execution");
+
+                    databaseCopyTaskRepository.setMigrationStatus(copyContext, MigrationProgress.PROCESSED,
+                            MigrationProgress.COMPLETED);
+                    return null;
+                }
+            });
+        } catch (final Exception e) {
+            LOG.error("Error during PostProcessor execution", e);
+            if (e instanceof RuntimeException) {
+                throw (RuntimeException) e;
+            } else {
+                throw new RuntimeException(e);
+            }
+        }
+    }
+
+    public void setDatabaseCopyTaskRepository(final DatabaseCopyTaskRepository databaseCopyTaskRepository) {
+        this.databaseCopyTaskRepository = databaseCopyTaskRepository;
+    }
+
+    public void setMigrationContext(final MigrationContext migrationContext) {
+        this.migrationContext = migrationContext;
+    }
+
+    public void setPerformanceProfiler(final PerformanceProfiler performanceProfiler) {
+        this.performanceProfiler = performanceProfiler;
+    }
+
+    public void setPostProcessors(final ArrayList<MigrationPostProcessor> postProcessors) {
+        this.postProcessors = postProcessors;
+    }
+}
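The MigrationPostProcessor contract is not part of this diff; judging from the forEach above it exposes a single process(CopyContext) hook per migration run. A hypothetical implementation could look like this:

    public class LoggingPostProcessor implements MigrationPostProcessor {
        @Override
        public void process(CopyContext copyContext) {
            // e.g. persist a report for copyContext.getMigrationId()
        }
    }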
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyDatabaseTableEventListener.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyDatabaseTableEventListener.java
new file mode 100644
index 0000000..e7372df
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/events/handlers/CopyDatabaseTableEventListener.java
@@ -0,0 +1,88 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.events.handlers;
+
+import com.sap.cx.boosters.commercedbsync.events.CopyDatabaseTableEvent;
+import de.hybris.platform.servicelayer.cluster.ClusterService;
+import de.hybris.platform.servicelayer.event.impl.AbstractEventListener;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTask;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationCopyService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.MDC;
+
+import java.util.HashSet;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants.MDC_CLUSTERID;
+import static com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants.MDC_MIGRATIONID;
+
+/**
+ * Listener that starts the Migration Process on a given node
+ */
+public class CopyDatabaseTableEventListener extends AbstractEventListener<CopyDatabaseTableEvent> {
+    private static final Logger LOG = LoggerFactory.getLogger(CopyDatabaseTableEventListener.class.getName());
+
+    private DatabaseMigrationCopyService databaseMigrationCopyService;
+
+    private DatabaseCopyTaskRepository databaseCopyTaskRepository;
+
+    private MigrationContext migrationContext;
+
+    private PerformanceProfiler performanceProfiler;
+
+    private ClusterService clusterService;
+
+    @Override
+    protected void onEvent(final CopyDatabaseTableEvent event) {
+        final String migrationId = event.getMigrationId();
+
+        LOG.debug("Starting Migration with Id {}", migrationId);
+        try (MDC.MDCCloseable ignored = MDC.putCloseable(MDC_MIGRATIONID, migrationId);
+             MDC.MDCCloseable ignored2 = MDC.putCloseable(MDC_CLUSTERID, String.valueOf(clusterService.getClusterId()))
+        ) {
+            CopyContext copyContext = new CopyContext(migrationId, migrationContext, new HashSet<>(), performanceProfiler);
+            Set<DatabaseCopyTask> copyTableTasks = databaseCopyTaskRepository.findPendingTasks(copyContext);
+            Set<CopyContext.DataCopyItem> items = copyTableTasks.stream().map(task -> new CopyContext.DataCopyItem(task.getSourcetablename(), task.getTargettablename(), task.getColumnmap(), task.getSourcerowcount())).collect(Collectors.toSet());
+            copyContext.getCopyItems().addAll(items);
+            databaseMigrationCopyService.copyAllAsync(copyContext);
+
+        } catch (Exception e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    public void setDatabaseMigrationCopyService(final DatabaseMigrationCopyService databaseMigrationCopyService) {
+        this.databaseMigrationCopyService = databaseMigrationCopyService;
+    }
+
+    public void setDatabaseCopyTaskRepository(final DatabaseCopyTaskRepository databaseCopyTaskRepository) {
+        this.databaseCopyTaskRepository = databaseCopyTaskRepository;
+    }
+
+    public void setMigrationContext(final MigrationContext migrationContext) {
+        this.migrationContext = migrationContext;
+    }
+
+    public void setPerformanceProfiler(final PerformanceProfiler performanceProfiler) {
+        this.performanceProfiler = performanceProfiler;
+    }
+
+    @Override
+    public void setClusterService(ClusterService clusterService) {
+        super.setClusterService(clusterService);
+        this.clusterService = clusterService;
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/DataCopyTableFilter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/DataCopyTableFilter.java
new file mode 100644
index 0000000..6457cff
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/DataCopyTableFilter.java
@@ -0,0 +1,15 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.filter;
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import java.util.function.Predicate;
+
+public interface DataCopyTableFilter {
+    Predicate<String> filter(MigrationContext context);
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/CompositeDataCopyTableFilter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/CompositeDataCopyTableFilter.java
new file mode 100644
index 0000000..0a883c6
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/CompositeDataCopyTableFilter.java
@@ -0,0 +1,27 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.filter.impl;
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter;
+
+import java.util.List;
+import java.util.function.Predicate;
+
+public class CompositeDataCopyTableFilter implements DataCopyTableFilter {
+
+    private List<DataCopyTableFilter> filters;
+
+    @Override
+    public Predicate<String> filter(MigrationContext context) {
+        return p -> filters.stream().allMatch(f -> f.filter(context).test(p));
+    }
+
+    public void setFilters(List<DataCopyTableFilter> filters) {
+        this.filters = filters;
+    }
+}
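The composite only lets a table through if every registered filter matches; a short sketch using the inclusion/exclusion filters defined just below (migrationContext and the table name are assumed for illustration):

    CompositeDataCopyTableFilter composite = new CompositeDataCopyTableFilter();
    composite.setFilters(List.of(new InclusionDataCopyTableFilter(), new ExclusionDataCopyTableFilter()));
    boolean shouldCopy = composite.filter(migrationContext).test("products");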
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.filter.impl; + +import com.google.common.base.Predicates; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter; +import org.apache.commons.lang.StringUtils; + +import java.util.Set; +import java.util.function.Predicate; + +public class ExclusionDataCopyTableFilter implements DataCopyTableFilter { + + @Override + public Predicate filter(MigrationContext context) { + Set excludedTables = context.getExcludedTables(); + if (excludedTables == null || excludedTables.isEmpty()) { + return Predicates.alwaysTrue(); + } + return p -> excludedTables.stream().noneMatch(e -> StringUtils.equalsIgnoreCase(e, p)); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/InclusionDataCopyTableFilter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/InclusionDataCopyTableFilter.java new file mode 100644 index 0000000..4a3d02d --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/InclusionDataCopyTableFilter.java @@ -0,0 +1,28 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.filter.impl; + +import com.google.common.base.Predicates; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter; +import org.apache.commons.lang.StringUtils; + +import java.util.Set; +import java.util.function.Predicate; + +public class InclusionDataCopyTableFilter implements DataCopyTableFilter { + + @Override + public Predicate filter(MigrationContext context) { + Set includedTables = context.getIncludedTables(); + if (includedTables == null || includedTables.isEmpty()) { + return Predicates.alwaysTrue(); + } + return p -> includedTables.stream().anyMatch(e -> StringUtils.equalsIgnoreCase(e, p)); + + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/IncrementalDataCopyTableFilter.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/IncrementalDataCopyTableFilter.java new file mode 100644 index 0000000..cf6ff4b --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/filter/impl/IncrementalDataCopyTableFilter.java @@ -0,0 +1,31 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.filter.impl; + +import com.google.common.base.Predicates; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter; +import org.apache.commons.lang.StringUtils; + +import java.util.Set; +import java.util.function.Predicate; + +public class IncrementalDataCopyTableFilter implements DataCopyTableFilter { + + @Override + public Predicate filter(MigrationContext context) { + if (!context.isIncrementalModeEnabled()) { + return Predicates.alwaysTrue(); + } + Set incrementalTables = context.getIncrementalTables(); + if (incrementalTables == null || incrementalTables.isEmpty()) { + throw new IllegalStateException("At least one table for incremental copy must be specified. 
Check property " + CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_TABLES); + } + return p -> incrementalTables.stream().anyMatch(e -> StringUtils.equalsIgnoreCase(e, p)); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/interceptors/DefaultCMTRemoveInterceptor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/interceptors/DefaultCMTRemoveInterceptor.java new file mode 100644 index 0000000..bc80218 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/interceptors/DefaultCMTRemoveInterceptor.java @@ -0,0 +1,117 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.interceptors; + +import com.google.common.base.Preconditions; +import com.google.common.base.Splitter; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import de.hybris.platform.core.model.ItemModel; +import de.hybris.platform.servicelayer.exceptions.ModelSavingException; +import de.hybris.platform.servicelayer.interceptor.InterceptorContext; +import de.hybris.platform.servicelayer.interceptor.RemoveInterceptor; +import de.hybris.platform.servicelayer.model.ModelService; +import de.hybris.platform.servicelayer.type.TypeService; +import de.hybris.platform.util.Config; +import java.util.Collections; +import java.util.List; +import javax.annotation.Nonnull; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.enums.ItemChangeType; +import com.sap.cx.boosters.commercedbsync.model.ItemDeletionMarkerModel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class DefaultCMTRemoveInterceptor implements RemoveInterceptor { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultCMTRemoveInterceptor.class); + + private static final boolean deletionsEnabled = Config.getBoolean(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES_ENABLED,false); + + private static final String COMMA_SEPERATOR = ","; + + private ModelService modelService; + private TypeService typeService; + + @Override + public void onRemove(@Nonnull final ItemModel model, @Nonnull final InterceptorContext ctx) { + + if (!deletionsEnabled ) { + if (LOG.isDebugEnabled()) { + LOG.debug("CMT deletions is not enabled for ItemModel."); + } + return; + } + + List deletionsItemType = getListDeletionsItemType(); + + if ( deletionsItemType == null || deletionsItemType.isEmpty()) { + if (LOG.isDebugEnabled()) { + LOG.debug("No table defined to create a deletion record for CMT "); + } + return; + } + + if (deletionsItemType.contains(model.getItemtype().toLowerCase())) { + + ItemDeletionMarkerModel idm = null; + try { + if(LOG.isDebugEnabled()){ + LOG.info("inside remove DefaultCMTRemoveInterceptor for" + String + .valueOf(typeService.getComposedTypeForCode(model.getItemtype()).getTable())); + } + + idm = modelService.create(ItemDeletionMarkerModel.class); + fillInitialDeletionMarker(idm, model.getPk().getLong(), + typeService.getComposedTypeForCode(model.getItemtype()).getTable()); + modelService.save(idm); + + } catch (ModelSavingException ex) { + LOG.error("Exception during save for CMT table {} , PK : {} ", model.getItemtype(), + model.getPk()); + } + } else { + if (LOG.isDebugEnabled()) { + LOG.debug("Table {} not defined for CMT deletion record", model.getItemtype()); + } + } + } + + private void fillInitialDeletionMarker(final ItemDeletionMarkerModel marker, final Long itemPK, + final String 
table) { + Preconditions.checkNotNull(marker, "ItemDeletionMarker cannot be null in this place"); + Preconditions + .checkArgument(marker.getItemModelContext().isNew(), "ItemDeletionMarker must be new"); + + marker.setItemPK(itemPK); + marker.setTable(table); + marker.setChangeType(ItemChangeType.DELETED); + } + + private List getListDeletionsItemType() { + // TO DO change to static variable + final String itemTypes = Config.getString( + CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES, ""); + if (StringUtils.isEmpty(itemTypes)) { + return Collections.emptyList(); + } + List result = Splitter.on(COMMA_SEPERATOR) + .omitEmptyStrings() + .trimResults() + .splitToList(itemTypes.toLowerCase()); + + return result; + } + + public void setModelService(final ModelService modelService) { + this.modelService = modelService; + } + + public void setTypeService(final TypeService typeService) { + this.typeService = typeService; + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jalo/ItemDeletionMarker.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jalo/ItemDeletionMarker.java new file mode 100644 index 0000000..34243c3 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jalo/ItemDeletionMarker.java @@ -0,0 +1,32 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.jalo; + +import de.hybris.platform.jalo.Item; +import de.hybris.platform.jalo.JaloBusinessException; +import de.hybris.platform.jalo.SessionContext; +import de.hybris.platform.jalo.type.ComposedType; +import org.apache.log4j.Logger; +import com.sap.cx.boosters.commercedbsync.jalo.GeneratedItemDeletionMarker; + +public class ItemDeletionMarker extends GeneratedItemDeletionMarker +{ + @SuppressWarnings("unused") + private static final Logger LOG = Logger.getLogger( ItemDeletionMarker.class.getName() ); + + @Override + protected Item createItem(final SessionContext ctx, final ComposedType type, final ItemAttributeMap allAttributes) throws JaloBusinessException + { + // business code placed here will be executed before the item is created + // then create the item + final Item item = super.createItem( ctx, type, allAttributes ); + // business code placed here will be executed after the item was created + // and return the item + return item; + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/AbstractMigrationJobPerformable.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/AbstractMigrationJobPerformable.java new file mode 100644 index 0000000..e8ceae6 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/AbstractMigrationJobPerformable.java @@ -0,0 +1,272 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.jobs; + +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import de.hybris.platform.cronjob.enums.CronJobResult; +import de.hybris.platform.cronjob.enums.CronJobStatus; +import de.hybris.platform.cronjob.jalo.AbortCronJobException; +import de.hybris.platform.cronjob.model.CronJobModel; +import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable; +import de.hybris.platform.servicelayer.cronjob.CronJobService; +import de.hybris.platform.servicelayer.cronjob.PerformResult; +import de.hybris.platform.util.Config; +import org.apache.commons.lang.StringUtils; +import org.apache.commons.lang3.BooleanUtils; +import com.sap.cx.boosters.commercedbsync.context.IncrementalMigrationContext; +import com.sap.cx.boosters.commercedbsync.model.cron.FullMigrationCronJobModel; +import com.sap.cx.boosters.commercedbsync.model.cron.IncrementalMigrationCronJobModel; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.jdbc.core.JdbcTemplate; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; +import java.time.Instant; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; +import java.util.Arrays; +import java.util.Set; + +public abstract class AbstractMigrationJobPerformable extends AbstractJobPerformable { + + private static final Logger LOG = LoggerFactory.getLogger(AbstractMigrationJobPerformable.class); + + private static final String[] TYPE_SYSTEM_RELATED_TYPES = new String[]{"atomictypes", "attributeDescriptors", "collectiontypes", "composedtypes", "enumerationvalues", "maptypes"}; + + private static final String MIGRATION_UPDATE_TYPE_SYSTEM = "migration.ds.update.typesystem.table"; + private static final String SOURCE_TYPESYSTEMNAME = "migration.ds.source.db.typesystemname"; + + private static final String SOURCE_TYPESYSTEMSUFFIX = "migration.ds.source.db.typesystemsuffix"; + + private static final String TYPESYSTEM_SELECT_STATEMENT = "IF (EXISTS (SELECT * \n" + + " FROM INFORMATION_SCHEMA.TABLES \n" + + " WHERE TABLE_SCHEMA = '%s' \n" + + " AND TABLE_NAME = '%2$s'))\n" + + "BEGIN\n" + + " select name from %2$s where state = 'current'\n" + + "END"; + + + protected DatabaseMigrationService databaseMigrationService; + protected IncrementalMigrationContext incrementalMigrationContext; + protected CronJobService cronJobService; + protected String currentMigrationId; + private JdbcTemplate jdbcTemplate; + + @Override + public boolean isPerformable() + { + for(CronJobModel cronJob : getCronJobService().getRunningOrRestartedCronJobs()){ + if ((cronJob instanceof IncrementalMigrationCronJobModel + || cronJob instanceof FullMigrationCronJobModel)) { + LOG.info("Previous migrations job already running {} and Type {} ", cronJob.getCode(), cronJob.getItemtype()); + return false; + } + } + return true; + } + + /* + * ORACLE_TARGET - START The updateTypesystemTabl() also updates the TS. There is scope to make these 2 update + * methods efficient i.e set the TS only once. 
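+ *
+ * In outline, matching the code below: both update methods query the
+ * CCV2_TYPESYSTEM_MIGRATIONS table on the source schema for the row whose
+ * state is 'current' and publish the resolved name via the
+ * migration.ds.source.db.typesystemname property; updateTypesystemTable()
+ * additionally derives migration.ds.source.db.typesystemsuffix from the
+ * matching row of CommercedbsyncConstants.DEPLOYMENTS_TABLE.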
+ */ + + protected void updateSourceTypesystemProperty() throws Exception + { + // Disabling Post processor + Config.setParameter("migration.data.postprocessor.tscheck.disable", "yes"); + + if(BooleanUtils.isFalse(Config.getBoolean(MIGRATION_UPDATE_TYPE_SYSTEM, false))){ + return; + } + DataRepository sourceRepository = incrementalMigrationContext.getDataSourceRepository(); + try( + Connection connection = sourceRepository.getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(String.format(TYPESYSTEM_SELECT_STATEMENT, + sourceRepository.getDataSourceConfiguration().getSchema(), "CCV2_TYPESYSTEM_MIGRATIONS")); + ) { + LOG.debug("SETTING the Type System from CCV2_TYPESYSTEM_MIGRATIONS" + String.format(TYPESYSTEM_SELECT_STATEMENT, + sourceRepository.getDataSourceConfiguration().getSchema(), "CCV2_TYPESYSTEM_MIGRATIONS")); + + String typeSystemName = null; + if (resultSet.next()) + { + typeSystemName = resultSet.getString("name"); + } + else + { + return; + } + if (typeSystemName != null && !typeSystemName.isEmpty()) + { + Config.setParameter(SOURCE_TYPESYSTEMNAME, typeSystemName); + LOG.info("SETTING typeSystemName = " + typeSystemName); + return; + } + } + } + protected void updateTypesystemTable(Set migrationItems) throws Exception { + + if(BooleanUtils.isFalse(Config.getBoolean(MIGRATION_UPDATE_TYPE_SYSTEM, false))){ + return; + } + DataRepository sourceRepository = incrementalMigrationContext.getDataSourceRepository(); + for(final String tableName: migrationItems){ + if(Arrays.stream(TYPE_SYSTEM_RELATED_TYPES).anyMatch(t -> StringUtils.startsWithIgnoreCase(tableName, t))) + { + try ( + Connection connection = sourceRepository.getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(String.format(TYPESYSTEM_SELECT_STATEMENT, + sourceRepository.getDataSourceConfiguration().getSchema(),"CCV2_TYPESYSTEM_MIGRATIONS")); + ) + { + LOG.debug("Type System table - table found in list, get latest TS => " + String.format(TYPESYSTEM_SELECT_STATEMENT, + sourceRepository.getDataSourceConfiguration().getSchema(), "CCV2_TYPESYSTEM_MIGRATIONS")); + String typeSystemName = null; + if (resultSet.next()) { + typeSystemName = resultSet.getString("name");; + } else{ + return; + } + + final String tsBaseTableName = extractTSbaseTableName(tableName); + + LOG.info("Type System table - table found in list, get latest Table name " + String.format( + "SELECT TableName FROM %s WHERE Typecode IS NOT NULL AND TableName LIKE '%s' AND TypeSystemName = '%s'", + CommercedbsyncConstants.DEPLOYMENTS_TABLE, tsBaseTableName + "%", typeSystemName)); + final String typeSystemTablesQuery = String.format( + "SELECT TableName FROM %s WHERE Typecode IS NOT NULL AND TableName LIKE '%s' AND TypeSystemName = '%s'", + CommercedbsyncConstants.DEPLOYMENTS_TABLE, tsBaseTableName + "%", typeSystemName); + final ResultSet typeSystemtableresultSet = stmt.executeQuery(typeSystemTablesQuery); + String typeSystemTableName = null; + if (typeSystemtableresultSet.next()) + { + typeSystemTableName = typeSystemtableresultSet.getString("TableName"); + } + // ORACLE_TARGET - START, add null check and return; + if (typeSystemTableName != null) + { + Config.setParameter(SOURCE_TYPESYSTEMNAME, typeSystemName); + final String typesystemsuffix = typeSystemTableName.substring(tsBaseTableName.length()); + + Config.setParameter(SOURCE_TYPESYSTEMSUFFIX, typesystemsuffix); + LOG.info("typeSystemName = " + typeSystemName + ",typesystemsuffix = " + 
typesystemsuffix); + return; + } + } + } + } + } + + /* + * If enumerationvalueslp, then extract enumerationvalues as base table name. + */ + private String extractTSbaseTableName(final String tableNameFromMigrationItems) + { + String tsBaseTableName = tableNameFromMigrationItems; + + // if it ends with lp + if (tableNameFromMigrationItems.toLowerCase().endsWith("lp")) + { + tsBaseTableName = tableNameFromMigrationItems.substring(0, tableNameFromMigrationItems.length() - 2); + } + + return tsBaseTableName; + } + + protected MigrationStatus waitForFinishCronjobs(IncrementalMigrationContext context, String migrationID, + final CronJobModel cronJobModel) throws Exception { + MigrationStatus status; + Thread.sleep(5000); + boolean aborted = false; + long since = 0; + do { + OffsetDateTime sinceTime = OffsetDateTime.ofInstant(Instant.ofEpochMilli(since), ZoneOffset.UTC); + status = databaseMigrationService.getMigrationState(context, migrationID,sinceTime); + Thread.sleep(5000); + since = System.currentTimeMillis(); + if (isJobStateAborted(cronJobModel)) + { + aborted = true; + break; + } + } while (!status.isCompleted()); + + if (aborted) + { + LOG.info(" Aborted ...STOPPING migration "); + databaseMigrationService.stopMigration(incrementalMigrationContext, currentMigrationId); + LOG.error("Database migration has been ABORTED, Migration State= " + status + ", Total Tasks " + + status.getTotalTasks() + ", migration id =" + status.getMigrationID() + ", Completed Tasks " + + status.getCompletedTasks()); + clearAbortRequestedIfNeeded(cronJobModel); + throw new AbortCronJobException("CronJOB ABORTED"); + } + + if (status.isFailed()) { + LOG.error("Database migration FAILED, Migration State= " + status + ", Total Tasks " + + status.getTotalTasks() + ", migration id =" + status.getMigrationID() + ", Completed Tasks " + + status.getCompletedTasks()); + throw new Exception("Database migration failed"); + } + + return status; + } + + protected boolean isJobStateAborted(final CronJobModel cronJobModel) + { + this.modelService.refresh(cronJobModel); + LOG.info("cron job status = " + cronJobModel.getStatus()); + LOG.info("cron job request to abort =" + cronJobModel.getRequestAbort()); + return ((cronJobModel.getStatus() == CronJobStatus.ABORTED) + || (cronJobModel.getRequestAbort() == null ? 
false : cronJobModel.getRequestAbort())); + } + + @Override + public boolean isAbortable() { + return true; + } + + public IncrementalMigrationContext getIncrementalMigrationContext() { + return incrementalMigrationContext; + } + + public void setIncrementalMigrationContext(IncrementalMigrationContext incrementalMigrationContext) { + this.incrementalMigrationContext = incrementalMigrationContext; + } + + public CronJobService getCronJobService() { + return cronJobService; + } + + public void setCronJobService(CronJobService cronJobService) { + this.cronJobService = cronJobService; + } + + public DatabaseMigrationService getDatabaseMigrationService() { + return databaseMigrationService; + } + + public void setDatabaseMigrationService(DatabaseMigrationService databaseMigrationService) { + this.databaseMigrationService = databaseMigrationService; + } + + public JdbcTemplate getJdbcTemplate() { + return jdbcTemplate; + } + + public void setJdbcTemplate(JdbcTemplate jdbcTemplate) { + this.jdbcTemplate = jdbcTemplate; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/FullMigrationJob.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/FullMigrationJob.java new file mode 100644 index 0000000..4a00890 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/FullMigrationJob.java @@ -0,0 +1,76 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.jobs; + +import com.google.common.base.Preconditions; +import de.hybris.platform.cronjob.enums.CronJobResult; +import de.hybris.platform.cronjob.enums.CronJobStatus; +import de.hybris.platform.cronjob.jalo.AbortCronJobException; +import de.hybris.platform.cronjob.model.CronJobModel; +import de.hybris.platform.servicelayer.cronjob.PerformResult; +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.model.cron.FullMigrationCronJobModel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.time.Instant; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; + + +/** + * This class offers functionality for FullMigrationJob. 
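+ *
+ * In outline: perform() validates that it received a FullMigrationCronJobModel
+ * with at least one migration item, registers those items as included tables,
+ * switches deletions, LP handling and incremental mode off, then starts the
+ * copy via DatabaseMigrationService and blocks in waitForFinishCronjobs()
+ * until the migration completes, fails or is aborted.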
+ */
+public class FullMigrationJob extends AbstractMigrationJobPerformable {
+
+ private static final Logger LOG = LoggerFactory.getLogger(FullMigrationJob.class);
+
+ @Override
+ public PerformResult perform(final CronJobModel cronJobModel) {
+ FullMigrationCronJobModel fullMigrationCronJobModel;
+
+ Preconditions
+ .checkState((cronJobModel instanceof FullMigrationCronJobModel),
+ "cronJobModel must be an instance of FullMigrationCronJobModel");
+ fullMigrationCronJobModel = (FullMigrationCronJobModel) cronJobModel;
+ Preconditions.checkState(
+ null != fullMigrationCronJobModel.getMigrationItems() && !fullMigrationCronJobModel
+ .getMigrationItems().isEmpty(),
+ "We expect at least one table for the full migration");
+
+ boolean caughtException = false;
+ try {
+ incrementalMigrationContext
+ .setIncludedTables(fullMigrationCronJobModel.getMigrationItems());
+ // ORACLE_TARGET - START there is scope to make the 2 update methods
+ // efficient
+ updateSourceTypesystemProperty();
+ // ORACLE_TARGET - END there is scope to make the 2 methods
+ // efficient
+ updateTypesystemTable(fullMigrationCronJobModel.getMigrationItems());
+ incrementalMigrationContext.setDeletionEnabled(false);
+ incrementalMigrationContext.setLpTableMigrationEnabled(false);
+ incrementalMigrationContext.setTruncateEnabled(fullMigrationCronJobModel.isTruncateEnabled());
+ incrementalMigrationContext.setSchemaMigrationAutoTriggerEnabled(fullMigrationCronJobModel.isSchemaAutotrigger());
+ incrementalMigrationContext.setIncrementalModeEnabled(false);
+ currentMigrationId = databaseMigrationService.startMigration(incrementalMigrationContext);
+ waitForFinishCronjobs(incrementalMigrationContext, currentMigrationId, cronJobModel);
+ }
+ catch (final AbortCronJobException e)
+ {
+ return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED);
+ }
+ catch (final Exception e)
+ {
+ caughtException = true;
+ LOG.error("Exception caught: message= " + e.getMessage(), e);
+ }
+ return new PerformResult(caughtException ? CronJobResult.FAILURE : CronJobResult.SUCCESS, CronJobStatus.FINISHED);
+ }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/IncrementalMigrationJob.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/IncrementalMigrationJob.java
new file mode 100644
index 0000000..cb03d3f
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/jobs/IncrementalMigrationJob.java
@@ -0,0 +1,259 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.jobs; + +import com.google.common.base.Preconditions; +import com.google.common.base.Splitter; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import de.hybris.platform.cronjob.enums.CronJobResult; +import de.hybris.platform.cronjob.enums.CronJobStatus; +import de.hybris.platform.cronjob.jalo.AbortCronJobException; +import de.hybris.platform.cronjob.model.CronJobModel; +import de.hybris.platform.jalo.type.TypeManager; +import de.hybris.platform.servicelayer.cronjob.PerformResult; +import de.hybris.platform.servicelayer.model.ModelService; +import de.hybris.platform.servicelayer.type.TypeService; +import de.hybris.platform.util.Config; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; +import java.time.Instant; +import java.util.*; +import java.util.stream.Collectors; +import javax.annotation.Resource; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.model.cron.IncrementalMigrationCronJobModel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + + +/** + * This class offers functionality for IncrementalMigrationJob. + */ +public class IncrementalMigrationJob extends AbstractMigrationJobPerformable { + + private static final Logger LOG = LoggerFactory.getLogger(IncrementalMigrationJob.class); + + private static final String LP_SUFFIX = "lp"; + + private static String tablePrefix = Config.getParameter("db.tableprefix") == null ? "" : Config.getParameter("db.tableprefix"); + + private static final String TABLE_EXISTS_SELECT_STATEMENT_MSSQL = "SELECT TABLE_NAME \n" + + " FROM INFORMATION_SCHEMA.TABLES \n" + + " WHERE TABLE_SCHEMA = '%s' \n" + + " AND TABLE_NAME = '%2$s'\n"; + private static final String TABLE_EXISTS_SELECT_STATEMENT_ORACLE = "SELECT TABLE_NAME \n" + " FROM dba_tables \n" + + " WHERE upper(owner) = upper('%s') \n" + " AND upper(table_name) = upper('%2$s') "; + + private static final String TABLE_EXISTS_SELECT_STATEMENT_HANA = "SELECT TABLE_NAME \n" + " FROM public.tables \n" + + " WHERE schema_name = upper('%s') \n" + " AND table_name = upper('%2$s') "; + + private static final String TABLE_EXISTS_SELECT_STATEMENT_POSTGRES = "SELECT TABLE_NAME \n" + " FROM public.tables \n" + + " WHERE schema_name = upper('%s') \n" + " AND table_name = upper('%2$s') "; + + + @Resource(name = "typeService") + private TypeService typeService; + + @Override + public PerformResult perform(final CronJobModel cronJobModel) { + IncrementalMigrationCronJobModel incrementalMigrationCronJob; + + Preconditions + .checkState((cronJobModel instanceof IncrementalMigrationCronJobModel), + "cronJobModel must the instance of FullMigrationCronJobModel"); + modelService.refresh(cronJobModel); + + incrementalMigrationCronJob = (IncrementalMigrationCronJobModel) cronJobModel; + Preconditions.checkState( + null != incrementalMigrationCronJob.getMigrationItems() && !incrementalMigrationCronJob + .getMigrationItems().isEmpty(), + "We expect at least one table for the incremental migration"); + final Set deletionTableSet = getDeletionTableSet(incrementalMigrationCronJob.getMigrationItems()); + MigrationStatus currentState; + String currentMigrationId; + boolean caughtExeption = false; + try { + + if (null != incrementalMigrationCronJob.getLastStartTime()) { + Instant timeStampInstant = 
incrementalMigrationCronJob.getLastStartTime().toInstant(); + LOG.info("For {} IncrementalTimestamp : {} ", incrementalMigrationCronJob.getCode(), + timeStampInstant); + incrementalMigrationContext.setIncrementalMigrationTimestamp(timeStampInstant); + } else { + LOG.error("IncrementalTimestamp is not set for Cronjobs : {} , Aborting the migration, and please set the *lastStartTime* before triggering" + + " ", incrementalMigrationCronJob.getCode()); + return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED); + } + incrementalMigrationContext.setIncrementalModeEnabled(true); + incrementalMigrationContext.setTruncateEnabled(Optional.ofNullable(incrementalMigrationCronJob.isTruncateEnabled()) + .map(e -> incrementalMigrationCronJob.isTruncateEnabled()) + .orElse(false)); + updateSourceTypesystemProperty(); + if (CollectionUtils.isNotEmpty(deletionTableSet) && isSchemaMigrationRequired(deletionTableSet)) { + // deletionTableSet.add(deletionTable); + LOG.info("Running Deletion incremental migration"); + incrementalMigrationContext.setSchemaMigrationAutoTriggerEnabled(false); + incrementalMigrationContext.setIncrementalTables(deletionTableSet); + incrementalMigrationContext.setDeletionEnabled(true); + incrementalMigrationContext.setLpTableMigrationEnabled(false); + currentMigrationId = databaseMigrationService.startMigration(incrementalMigrationContext); + currentState = databaseMigrationService.waitForFinish(this.incrementalMigrationContext, currentMigrationId); + } + + // Running incremental migration + Set tablesWithoutLp = incrementalMigrationCronJob.getMigrationItems().stream(). + filter(table-> !(StringUtils.endsWithIgnoreCase(table, LP_SUFFIX))).collect( + Collectors.toSet()); + if(CollectionUtils.isNotEmpty(tablesWithoutLp)){ + LOG.info("Running incremental migration for Non LP Table"); + incrementalMigrationContext.setDeletionEnabled(false); + incrementalMigrationContext.setLpTableMigrationEnabled(false); + incrementalMigrationContext.setIncrementalTables(tablesWithoutLp); + incrementalMigrationContext.setSchemaMigrationAutoTriggerEnabled(incrementalMigrationCronJob.isSchemaAutotrigger()); + currentMigrationId = databaseMigrationService.startMigration(incrementalMigrationContext); + currentState = waitForFinishCronjobs(incrementalMigrationContext, currentMigrationId,cronJobModel); + + } + // Running incremental migration for LP Table + Set tablesWithLp = incrementalMigrationCronJob.getMigrationItems().stream(). 
+ filter(table -> StringUtils.endsWithIgnoreCase(table, LP_SUFFIX)).collect(
+ Collectors.toSet());
+ if (CollectionUtils.isNotEmpty(tablesWithLp)) {
+ LOG.info("Running incremental migration for LP Table");
+ incrementalMigrationContext.setDeletionEnabled(false);
+ incrementalMigrationContext.setLpTableMigrationEnabled(true);
+ incrementalMigrationContext.setIncrementalTables(tablesWithLp);
+ incrementalMigrationContext.setSchemaMigrationAutoTriggerEnabled(incrementalMigrationCronJob.isSchemaAutotrigger());
+ currentMigrationId = databaseMigrationService.startMigration(incrementalMigrationContext);
+ currentState = waitForFinishCronjobs(incrementalMigrationContext, currentMigrationId, cronJobModel);
+ }
+ }
+ catch (final AbortCronJobException e)
+ {
+ caughtExeption = true;
+ return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED);
+ }
+ catch (final Exception e) {
+ caughtExeption = true;
+ LOG.error("Exception caught:", e);
+ }
+ if (!caughtExeption) {
+ incrementalMigrationCronJob.setLastStartTime(cronJobModel.getStartTime());
+ modelService.save(cronJobModel);
+ }
+ return new PerformResult(caughtExeption ? CronJobResult.FAILURE : CronJobResult.SUCCESS,
+ CronJobStatus.FINISHED);
+ }
+
+ private Set<String> getDeletionTableSetFromItemType(Set<String> incMigrationItems) {
+ String deletionItemTypes = Config
+ .getString(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES, "");
+ if (StringUtils.isEmpty(deletionItemTypes)) {
+ return Collections.emptySet();
+ }
+
+ final Set<String> result = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
+
+ final List<String> itemtypesArray = Splitter.on(",")
+ .omitEmptyStrings()
+ .trimResults()
+ .splitToList(deletionItemTypes.toLowerCase());
+
+ String tableName;
+ for (String itemType : itemtypesArray) {
+ tableName = typeService.getComposedTypeForCode(itemType).getTable();
+
+ if (StringUtils.startsWith(tableName, tablePrefix)) {
+ tableName = StringUtils.removeStart(tableName, tablePrefix);
+ }
+ if (incMigrationItems.contains(tableName)) {
+ result.add(tableName);
+ }
+ }
+ return result;
+ }
+
+ private Set<String> getDeletionTableSetFromTypeCodes(Set<String> incMigrationItems) {
+ String deletionTypecodes = Config
+ .getString(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES, "");
+ if (StringUtils.isEmpty(deletionTypecodes)) {
+ return Collections.emptySet();
+ }
+
+ final Set<String> result = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
+
+ final List<String> typecodeArray = Splitter.on(",")
+ .omitEmptyStrings()
+ .trimResults()
+ .splitToList(deletionTypecodes.toLowerCase());
+
+ String tableName;
+ for (String typecode : typecodeArray) {
+ tableName = TypeManager.getInstance()
+ .getRootComposedType(Integer.valueOf(typecode)).getTable();
+
+ if (StringUtils.startsWith(tableName, tablePrefix)) {
+ tableName = StringUtils.removeStart(tableName, tablePrefix);
+ }
+ if (incMigrationItems.contains(tableName)) {
+ result.add(tableName);
+ }
+ }
+ return result;
+ }
+
+ // TODO: cache this decision in a static variable
+ private Set<String> getDeletionTableSet(Set<String> incMigrationItems) {
+ if (Config
+ .getBoolean(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES_ENABLED, false)) {
+ return getDeletionTableSetFromTypeCodes(incMigrationItems);
+ }
+ else if (Config
+ .getBoolean(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_ITEMTYPES_ENABLED, false)) {
+ return getDeletionTableSetFromItemType(incMigrationItems);
+ }
+ return Collections.emptySet();
+ }
+
+ private boolean isSchemaMigrationRequired(Set<String> deletionTableSet) throws Exception {
+ String
TABLE_EXISTS_SELECT_STATEMENT; + if(incrementalMigrationContext.getDataTargetRepository().getDatabaseProvider().isHanaUsed()){ + TABLE_EXISTS_SELECT_STATEMENT = TABLE_EXISTS_SELECT_STATEMENT_HANA; + } else if(incrementalMigrationContext.getDataTargetRepository().getDatabaseProvider().isOracleUsed()){ + TABLE_EXISTS_SELECT_STATEMENT = TABLE_EXISTS_SELECT_STATEMENT_ORACLE; + } else if(incrementalMigrationContext.getDataTargetRepository().getDatabaseProvider().isMssqlUsed()){ + TABLE_EXISTS_SELECT_STATEMENT = TABLE_EXISTS_SELECT_STATEMENT_MSSQL; + }else if(incrementalMigrationContext.getDataTargetRepository().getDatabaseProvider().isPostgreSqlUsed()){ + TABLE_EXISTS_SELECT_STATEMENT = TABLE_EXISTS_SELECT_STATEMENT_POSTGRES; + } else{ + TABLE_EXISTS_SELECT_STATEMENT = TABLE_EXISTS_SELECT_STATEMENT_MSSQL; + } + try ( + Connection connection = incrementalMigrationContext.getDataTargetRepository() + .getConnection(); + Statement stmt = connection.createStatement(); + ) { + for (final String tableName : deletionTableSet) { + try (ResultSet resultSet = stmt.executeQuery(String.format(TABLE_EXISTS_SELECT_STATEMENT, + incrementalMigrationContext.getDataTargetRepository().getDataSourceConfiguration() + .getSchema(), tableName)); + ) { + String TABLE_NAME = null; + if (resultSet.next()) { + //TABLE_NAME = resultSet.getString("TABLE_NAME"); + } else { + return true; + } + } + } + } + return false; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/listeners/DefaultCMTAfterSaveListener.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/listeners/DefaultCMTAfterSaveListener.java new file mode 100644 index 0000000..1534346 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/listeners/DefaultCMTAfterSaveListener.java @@ -0,0 +1,116 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.listeners; + +import com.google.common.base.Preconditions; +import com.google.common.base.Splitter; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import de.hybris.platform.jalo.type.TypeManager; +import de.hybris.platform.servicelayer.model.ModelService; +import de.hybris.platform.servicelayer.type.TypeService; +import de.hybris.platform.tx.AfterSaveEvent; +import de.hybris.platform.tx.AfterSaveListener; +import de.hybris.platform.util.Config; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.enums.ItemChangeType; +import com.sap.cx.boosters.commercedbsync.model.ItemDeletionMarkerModel; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * DefaultCMTAfterSaveListener is an implementation of {@link AfterSaveListener} for use with + * capturing changes to Delete operations for any configured data models. 
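+ * In outline: afterSave() reacts to AfterSaveEvent.REMOVE events whose PK
+ * typecode is configured under
+ * CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES,
+ * resolves the affected table through TypeManager and saves an
+ * ItemDeletionMarker row recording the deletion for later incremental runs.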
+ * + */ +public class DefaultCMTAfterSaveListener implements AfterSaveListener { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultCMTAfterSaveListener.class); + + private ModelService modelService; + + private static final String COMMA_SEPERATOR = ","; + + private TypeService typeService; + + private static final boolean deletionsEnabled = Config + .getBoolean(CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES_ENABLED,false); + + + @Override + public void afterSave(final Collection events) { + if (!deletionsEnabled) { + if (LOG.isDebugEnabled()) { + LOG.debug("CMT deletions is not enabled for ItemModel."); + } + return; + } + + List deletionsTypeCode = getListDeletionsTypeCode(); + + if (deletionsTypeCode == null || deletionsTypeCode.isEmpty()) { + if (LOG.isDebugEnabled()) { + LOG.debug("No typecode defined to create a deletion record for CMT "); + } + return; + } + events.forEach(event -> { + { + final int type = event.getType(); + final String typeCodeAsString = event.getPk().getTypeCodeAsString(); + if (AfterSaveEvent.REMOVE == type && deletionsTypeCode.contains(typeCodeAsString)) { + final String tableName = TypeManager.getInstance() + .getRootComposedType(event.getPk().getTypeCode()).getTable(); + final ItemDeletionMarkerModel idm = modelService.create(ItemDeletionMarkerModel.class); + convertAndfillInitialDeletionMarker(idm, event.getPk().getLong(), + tableName); + modelService.save(idm); + + } + } + }); + + } + + private void convertAndfillInitialDeletionMarker(final ItemDeletionMarkerModel marker, final Long itemPK, + final String table) + { + Preconditions.checkNotNull(marker, "ItemDeletionMarker cannot be null in this place"); + Preconditions + .checkArgument(marker.getItemModelContext().isNew(), "ItemDeletionMarker must be new"); + + marker.setItemPK(itemPK); + marker.setTable(table); + marker.setChangeType(ItemChangeType.DELETED); + } + + + // TO DO change to static variable + private List getListDeletionsTypeCode() { + final String typeCodes = Config.getString( + CommercedbsyncConstants.MIGRATION_DATA_INCREMENTAL_DELETIONS_TYPECODES, ""); + if (StringUtils.isEmpty(typeCodes)) { + return Collections.emptyList(); + } + List result = Splitter.on(COMMA_SEPERATOR) + .omitEmptyStrings() + .trimResults() + .splitToList(typeCodes); + + return result; + } + + public void setModelService(final ModelService modelService) + { + this.modelService = modelService; + } + + public void setTypeService(TypeService typeService) { + this.typeService = typeService; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceCategory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceCategory.java new file mode 100644 index 0000000..4a794b5 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceCategory.java @@ -0,0 +1,11 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.performance; + +public enum PerformanceCategory { + DB_READ, DB_WRITE +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceProfiler.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceProfiler.java new file mode 100644 index 0000000..0fe3652 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceProfiler.java @@ -0,0 +1,26 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.performance; + +import java.util.Collection; +import java.util.concurrent.ConcurrentHashMap; + +public interface PerformanceProfiler { + PerformanceRecorder createRecorder(PerformanceCategory category, String name); + + void muteRecorder(PerformanceCategory category, String name); + + ConcurrentHashMap getRecorders(); + + Collection getRecordersByCategory(PerformanceCategory category); + + double getAverageByCategoryAndUnit(PerformanceCategory category, PerformanceUnit unit); + + PerformanceRecorder getRecorder(PerformanceCategory category, String name); + + void reset(); +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceRecorder.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceRecorder.java new file mode 100644 index 0000000..feff5c1 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceRecorder.java @@ -0,0 +1,143 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.performance; + +import com.google.common.base.Joiner; +import com.google.common.base.Stopwatch; +import com.google.common.util.concurrent.AtomicDouble; + +import javax.annotation.concurrent.ThreadSafe; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.TimeUnit; + +/** + * + */ +public class PerformanceRecorder { + + private ConcurrentHashMap records = new ConcurrentHashMap<>(); + + private Stopwatch timer; + private PerformanceCategory category; + private String name; + + public PerformanceRecorder(PerformanceCategory category, String name) { + this(category, name, false); + } + + public PerformanceRecorder(PerformanceCategory category, String name, boolean autoStart) { + this.category = category; + this.name = name; + if (autoStart) { + this.timer = Stopwatch.createStarted(); + } else { + this.timer = Stopwatch.createUnstarted(); + } + } + + public void start() { + this.timer.start(); + } + + public void pause() { + this.timer.stop(); + } + + public String getName() { + return name; + } + + public PerformanceCategory getCategory() { + return category; + } + + public void record(PerformanceUnit unit, double value) { + if (getRecords().containsKey(unit)) { + getRecords().get(unit).submit(value); + } else { + PerformanceAggregation performanceAggregation = new PerformanceAggregation(getTimer(), unit); + performanceAggregation.submit(value); + getRecords().put(unit, performanceAggregation); + } + } + + public ConcurrentHashMap getRecords() { + return records; + } + + private Stopwatch getTimer() { + return timer; + } + + @Override + public String toString() { + return "PerformanceRecorder{name=" + getName() + ",{" + Joiner.on("},{").join(getRecords().values()) + "}}"; + } + + 
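+ // Illustrative usage sketch; the calls below exist in this package, only the
+ // recorder name and the rowsCopied variable are invented for the example:
+ //
+ //   PerformanceRecorder recorder =
+ //       profiler.createRecorder(PerformanceCategory.DB_READ, "customers");
+ //   recorder.start();
+ //   // ... copy a batch of rows ...
+ //   recorder.record(PerformanceUnit.ROWS, rowsCopied);
+ //
+ // Each record() call feeds a PerformanceAggregation (below), which derives
+ // average/min/max throughput per unit from the shared Stopwatch.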
@ThreadSafe
+ public static class PerformanceAggregation {
+
+ private Stopwatch timer;
+ private PerformanceUnit performanceUnit;
+ private TimeUnit timeUnit = TimeUnit.SECONDS;
+ private AtomicDouble sum = new AtomicDouble(0);
+ private AtomicDouble max = new AtomicDouble(0);
+ private AtomicDouble min = new AtomicDouble(0);
+ private AtomicDouble avg = new AtomicDouble(0);
+
+ public PerformanceAggregation(Stopwatch timer, PerformanceUnit performanceUnit) {
+ this.performanceUnit = performanceUnit;
+ this.timer = timer;
+ }
+
+ protected void submit(double value) {
+ getTotalThroughput().addAndGet(value);
+ long elapsed = timer.elapsed(TimeUnit.MILLISECONDS);
+ float elapsedToSeconds = elapsed / 1000f;
+ if (elapsedToSeconds > 0) {
+ getAvgThroughput().set(getTotalThroughput().get() / elapsedToSeconds);
+ double currentAvg = getAvgThroughput().get();
+ getMaxThroughput().set(Math.max(getMaxThroughput().get(), currentAvg));
+ // track the lowest observed throughput; seed with the first sample instead of the 0 default
+ double previousMin = getMinThroughput().get();
+ getMinThroughput().set(previousMin > 0 ? Math.min(previousMin, currentAvg) : currentAvg);
+ }
+ }
+
+ public PerformanceUnit getPerformanceUnit() {
+ return performanceUnit;
+ }
+
+ public AtomicDouble getTotalThroughput() {
+ return sum;
+ }
+
+ public AtomicDouble getAvgThroughput() {
+ return avg;
+ }
+
+ public AtomicDouble getMinThroughput() {
+ return min;
+ }
+
+ public AtomicDouble getMaxThroughput() {
+ return max;
+ }
+
+ public TimeUnit getTimeUnit() {
+ return timeUnit;
+ }
+
+ @Override
+ public String toString() {
+ return "PerformanceAggregation{" +
+ "performanceUnit=" + performanceUnit +
+ ", sum=" + sum +
+ ", max=" + max + " " + performanceUnit + "/" + timeUnit +
+ ", min=" + min + " " + performanceUnit + "/" + timeUnit +
+ ", avg=" + avg + " " + performanceUnit + "/" + timeUnit +
+ '}';
+ }
+ }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceUnit.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceUnit.java
new file mode 100644
index 0000000..c2d5cfd
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/PerformanceUnit.java
@@ -0,0 +1,11 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.performance;
+
+public enum PerformanceUnit {
+ ROWS, MB
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/impl/DefaultPerformanceProfiler.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/impl/DefaultPerformanceProfiler.java
new file mode 100644
index 0000000..043eec0
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/performance/impl/DefaultPerformanceProfiler.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.performance.impl; + +import com.sap.cx.boosters.commercedbsync.performance.PerformanceCategory; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceRecorder; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceUnit; + +import java.util.Collection; +import java.util.concurrent.ConcurrentHashMap; +import java.util.stream.Collectors; + +public class DefaultPerformanceProfiler implements PerformanceProfiler { + + private ConcurrentHashMap recorders = new ConcurrentHashMap<>(); + + + @Override + public PerformanceRecorder createRecorder(PerformanceCategory category, String name) { + String recorderName = createRecorderName(category, name); + return recorders.computeIfAbsent(recorderName, key -> new PerformanceRecorder(category, recorderName)); + } + + @Override + public void muteRecorder(PerformanceCategory category, String name) { + String recorderName = createRecorderName(category, name); + this.recorders.remove(recorderName); + } + + @Override + public ConcurrentHashMap getRecorders() { + return recorders; + } + + @Override + public Collection getRecordersByCategory(PerformanceCategory category) { + return recorders.values().stream().filter(r -> category == r.getCategory()).collect(Collectors.toList()); + } + + @Override + public double getAverageByCategoryAndUnit(PerformanceCategory category, PerformanceUnit unit) { + Collection recordersByCategory = getRecordersByCategory(category); + return recordersByCategory.stream().filter(r -> r.getRecords().get(unit) != null).mapToDouble(r -> + r.getRecords().get(unit).getAvgThroughput().get() + ).average().orElse(0); + } + + @Override + public PerformanceRecorder getRecorder(PerformanceCategory category, String name) { + return recorders.get(createRecorderName(category, name)); + } + + @Override + public void reset() { + getRecorders().clear(); + } + + protected String createRecorderName(PerformanceCategory category, String name) { + return category + "->" + name; + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/MigrationPostProcessor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/MigrationPostProcessor.java new file mode 100644 index 0000000..b028ccf --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/MigrationPostProcessor.java @@ -0,0 +1,17 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.processors; + +import com.sap.cx.boosters.commercedbsync.context.CopyContext; + +/** + * Postprocessor activated after a migration has terminated + */ +public interface MigrationPostProcessor { + + void process(CopyContext context); +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/AdjustActiveTypeSystemPostProcessor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/AdjustActiveTypeSystemPostProcessor.java new file mode 100644 index 0000000..c28776f --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/AdjustActiveTypeSystemPostProcessor.java @@ -0,0 +1,95 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.processors.impl; + +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import de.hybris.platform.servicelayer.config.ConfigurationService; +import org.apache.commons.lang3.StringUtils; +import com.sap.cx.boosters.commercedbsync.processors.MigrationPostProcessor; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.util.Arrays; + +public class AdjustActiveTypeSystemPostProcessor implements MigrationPostProcessor { + + private static final Logger LOG = LoggerFactory.getLogger(AdjustActiveTypeSystemPostProcessor.class.getName()); + + private static final String CCV2_TS_MIGRATION_TABLE = "CCV2_TYPESYSTEM_MIGRATIONS"; + private static final String TYPESYSTEM_ADJUST_STATEMENT = "IF (EXISTS (SELECT * \n" + + " FROM INFORMATION_SCHEMA.TABLES \n" + + " WHERE TABLE_SCHEMA = '%s' \n" + + " AND TABLE_NAME = '%3$s'))\n" + + "BEGIN\n" + + " UPDATE [%3$s] SET [state] = 'retired' WHERE 1=1;\n" + + " UPDATE [%3$s] SET [state] = 'current', [comment] = 'Updated by CMT' WHERE [name] = '%s';\n" + + "END"; + // ORACLR_TARGET - START + private static final String[] TRUEVALUES = new String[] { "yes", "y", "true", "0" }; + private static final String CMT_DISABLED_POST_PROCESSOR = "migration.data.postprocessor.tscheck.disable"; + private ConfigurationService configurationService; + + /** + * @return the configurationService + */ + public ConfigurationService getConfigurationService() { + return configurationService; + } + + /** + * @param configurationService + * the configurationService to set + */ + public void setConfigurationService(final ConfigurationService configurationService) { + this.configurationService = configurationService; + } + + @Override + public void process(final CopyContext context) { + + if (isPostProcesorDisabled()) { + LOG.info("TS post processor is disabled "); + return; + } + final DataRepository targetRepository = context.getMigrationContext().getDataTargetRepository(); + final String typeSystemName = targetRepository.getDataSourceConfiguration().getTypeSystemName(); + + try ( Connection connection = targetRepository.getConnection(); + PreparedStatement statement = connection.prepareStatement(String.format(TYPESYSTEM_ADJUST_STATEMENT, + targetRepository.getDataSourceConfiguration().getSchema(), typeSystemName, + getMigrationsTableName(targetRepository))); + ) { + statement.execute(); + + LOG.info("Adjusted active type system to: " + typeSystemName); + } catch (SQLException e) { + LOG.error("Error executing post processor (SQLException) ", e); + } catch (Exception e) { + LOG.error("Error executing post processor", e); + } + } + + private String getMigrationsTableName(final DataRepository repository) { + return StringUtils.trimToEmpty(repository.getDataSourceConfiguration().getTablePrefix()) + .concat(CCV2_TS_MIGRATION_TABLE); + } + + private boolean isPostProcesorDisabled() { + final String ccv2DisabledProperties = getConfigurationService().getConfiguration() + .getString(CMT_DISABLED_POST_PROCESSOR); + // boolean disabled = false; + if (ccv2DisabledProperties == null || ccv2DisabledProperties.isEmpty()) { + return false; + } + return Arrays.stream(TRUEVALUES).anyMatch(ccv2DisabledProperties::equalsIgnoreCase); + // return disabled; + } +} diff --git 
a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/DefaultMigrationPostProcessor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/DefaultMigrationPostProcessor.java new file mode 100644 index 0000000..f7d4745 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/DefaultMigrationPostProcessor.java @@ -0,0 +1,25 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.processors.impl; + +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.processors.MigrationPostProcessor; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Implements the {@link MigrationPostProcessor} + */ +public class DefaultMigrationPostProcessor implements MigrationPostProcessor { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultMigrationPostProcessor.class.getName()); + + @Override + public void process(CopyContext context) { + LOG.info("DefaultMigrationPostProcessor Finished"); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/ReportMigrationPostProcessor.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/ReportMigrationPostProcessor.java new file mode 100644 index 0000000..ec8738a --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/processors/impl/ReportMigrationPostProcessor.java @@ -0,0 +1,50 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.processors.impl; + +import com.google.gson.Gson; +import com.google.gson.GsonBuilder; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportStorageService; +import com.sap.cx.boosters.commercedbsync.MigrationReport; +import com.sap.cx.boosters.commercedbsync.processors.MigrationPostProcessor; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.nio.charset.StandardCharsets; + +public class ReportMigrationPostProcessor implements MigrationPostProcessor { + + private static final Logger LOG = LoggerFactory.getLogger(ReportMigrationPostProcessor.class.getName()); + + private DatabaseMigrationReportService databaseMigrationReportService; + private DatabaseMigrationReportStorageService databaseMigrationReportStorageService; + + @Override + public void process(CopyContext context) { + try { + Gson gson = new GsonBuilder().setPrettyPrinting().create(); + MigrationReport migrationReport = databaseMigrationReportService.getMigrationReport(context); + InputStream is = new ByteArrayInputStream(gson.toJson(migrationReport).getBytes(StandardCharsets.UTF_8)); + databaseMigrationReportStorageService.store(context.getMigrationId() + ".json", is); + LOG.info("Finished writing database migration report"); + } catch (Exception e) { + LOG.error("Error executing post processor", e); + } + } + + public void setDatabaseMigrationReportService(DatabaseMigrationReportService databaseMigrationReportService) { + this.databaseMigrationReportService = databaseMigrationReportService; + } + + public void 
setDatabaseMigrationReportStorageService(DatabaseMigrationReportStorageService databaseMigrationReportStorageService) { + this.databaseMigrationReportStorageService = databaseMigrationReportStorageService; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/DataSourceConfiguration.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/DataSourceConfiguration.java new file mode 100644 index 0000000..e869f26 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/DataSourceConfiguration.java @@ -0,0 +1,40 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.profile; + +/** + * Contains a DataSource Configuration + */ +public interface DataSourceConfiguration { + String getProfile(); + + String getDriver(); + + String getConnectionString(); + + String getUserName(); + + String getPassword(); + + String getSchema(); + + String getTypeSystemName(); + + String getTypeSystemSuffix(); + + String getCatalog(); + + String getTablePrefix(); + + int getMaxActive(); + + int getMaxIdle(); + + int getMinIdle(); + + boolean isRemoveAbandoned(); +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/impl/DefaultDataSourceConfiguration.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/impl/DefaultDataSourceConfiguration.java new file mode 100644 index 0000000..796a8d0 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/profile/impl/DefaultDataSourceConfiguration.java @@ -0,0 +1,161 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.profile.impl; + +import org.apache.commons.configuration.Configuration; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; + +/** + * Contains the JDBC DataSource Configuration + */ +public class DefaultDataSourceConfiguration implements DataSourceConfiguration { + + private String profile; + private Configuration configuration; + private String driver; + private String connectionString; + private String userName; + private String password; + private String schema; + private String catalog; + private String tablePrefix; + private String typeSystemName; + private String typeSystemSuffix; + private int maxActive; + private int maxIdle; + private int minIdle; + private boolean removedAbandoned; + + public DefaultDataSourceConfiguration(Configuration configuration, String profile) { + this.profile = profile; + this.configuration = configuration; + this.load(configuration, profile); + } + + @Override + public String getProfile() { + return profile; + } + + @Override + public String getDriver() { + return driver; + } + + @Override + public String getConnectionString() { + return connectionString; + } + + @Override + public String getUserName() { + return userName; + } + + @Override + public String getPassword() { + return password; + } + + @Override + public String getSchema() { + return schema; + } + + @Override + public String getTypeSystemName() { + this.typeSystemName = getProfileProperty(profile, configuration, "db.typesystemname"); + return typeSystemName; + } + + @Override + public String getTypeSystemSuffix() { + this.typeSystemSuffix = getProfileProperty(profile, configuration, "db.typesystemsuffix"); + return typeSystemSuffix; + } + + 
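+ // Property resolution scheme (see getProfileProperty/createProfilePropertyKey
+ // further below): every value is read from a profile-scoped key of the form
+ // migration.ds.<profile>.<key>, and a value written as a ${...} placeholder
+ // is dereferenced once more against the configuration before it is used.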
@Override + public String getCatalog() { + return catalog; + } + + @Override + public String getTablePrefix() { + return tablePrefix; + } + + @Override + public int getMaxActive() { + return maxActive; + } + + @Override + public int getMaxIdle() { + return maxIdle; + } + + @Override + public int getMinIdle() { + return minIdle; + } + + @Override + public boolean isRemoveAbandoned() { + return removedAbandoned; + } + + protected void load(Configuration configuration, String profile) { + this.driver = getProfileProperty(profile, configuration, "db.driver"); + this.connectionString = getProfileProperty(profile, configuration, "db.url"); + this.userName = getProfileProperty(profile, configuration, "db.username"); + this.password = getProfileProperty(profile, configuration, "db.password"); + this.schema = getProfileProperty(profile, configuration, "db.schema"); + this.catalog = getProfileProperty(profile, configuration, "db.catalog"); + this.tablePrefix = getProfileProperty(profile, configuration, "db.tableprefix"); + this.typeSystemName = getProfileProperty(profile, configuration, "db.typesystemname"); + this.typeSystemSuffix = getProfileProperty(profile, configuration, "db.typesystemsuffix"); + this.maxActive = parseInt(getProfileProperty(profile, configuration, "db.connection.pool.size.active.max")); + this.maxIdle = parseInt(getProfileProperty(profile, configuration, "db.connection.pool.size.idle.max")); + this.minIdle = parseInt(getProfileProperty(profile, configuration, "db.connection.pool.size.idle.min")); + this.removedAbandoned = Boolean.parseBoolean(getProfileProperty(profile, configuration, "db.connection.removeabandoned")); + } + + protected String getNormalProperty(Configuration configuration, String key) { + return checkProperty(configuration.getString(key), key); + } + + protected int parseInt(String value) { + if (StringUtils.isEmpty(value)) { + return 0; + } else { + return Integer.parseInt(value); + } + } + + protected String getProfileProperty(String profile, Configuration configuration, String key) { + String profilePropertyKey = createProfilePropertyKey(key, profile); + String property = configuration.getString(profilePropertyKey); + // values of the form ${other.property} are resolved by looking up the referenced key + if (StringUtils.startsWith(property, "${")) { + property = configuration.getString(StringUtils.substringBetween(property, "{", "}")); + } + return checkProperty(property, profilePropertyKey); + } + + protected String checkProperty(String property, String key) { + if (property != null) { + return property; + } else { + throw new IllegalArgumentException(String.format("Property %s is not configured", key)); + } + } + + protected String createProfilePropertyKey(String key, String profile) { + return "migration.ds." + profile + "." + key; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/CopyItemProvider.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/CopyItemProvider.java new file mode 100644 index 0000000..1a6761f --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/CopyItemProvider.java @@ -0,0 +1,24 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.provider; + +import com.sap.cx.boosters.commercedbsync.TableCandidate; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; + +import java.util.Set; + +/** + * Provides the means to copy an item from source to target. + */ +public interface CopyItemProvider { + Set<CopyContext.DataCopyItem> get(MigrationContext context) throws Exception; + + Set<TableCandidate> getSourceTableCandidates(MigrationContext context) throws Exception; + + Set<TableCandidate> getTargetTableCandidates(MigrationContext context) throws Exception; +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/impl/DefaultDataCopyItemProvider.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/impl/DefaultDataCopyItemProvider.java new file mode 100644 index 0000000..cb62665 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/provider/impl/DefaultDataCopyItemProvider.java @@ -0,0 +1,292 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.provider.impl; + +import com.google.common.collect.Sets; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.provider.CopyItemProvider; +import de.hybris.bootstrap.ddl.DataBaseProvider; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.TableCandidate; +import com.sap.cx.boosters.commercedbsync.TypeSystemTable; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Arrays; +import java.util.Comparator; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import java.util.TreeSet; +import java.util.stream.Collectors; + +public class DefaultDataCopyItemProvider implements CopyItemProvider { + + public static final String SN_SUFFIX = "sn"; + private static final String LP_SUFFIX = "lp"; + private static final String LP_SUFFIX_UPPER = "LP"; + + private static final Logger LOG = LoggerFactory.getLogger(DefaultDataCopyItemProvider.class); + + private static final String[] TYPE_SYSTEM_RELATED_TYPES = new String[] { "atomictypes", "attributeDescriptors", + "collectiontypes", "composedtypes", "enumerationvalues", "maptypes" }; + private final Comparator<TableCandidate> tableCandidateComparator = (o1, o2) -> o1.getCommonTableName() + .compareToIgnoreCase(o2.getCommonTableName()); + private DataCopyTableFilter dataCopyTableFilter; + + private static boolean shouldMigrateAuditTable(final MigrationContext context, final String auditTableName) { + return context.isAuditTableMigrationEnabled() && StringUtils.isNotEmpty(auditTableName); + } + + // ORACLE_TARGET - START + private void logTables(final Set<TableCandidate> tablesCandidates, final String debugtext) { + if (LOG.isDebugEnabled()) { + LOG.debug("---------START------," + debugtext); + if (CollectionUtils.isNotEmpty(tablesCandidates)) { + for (final TableCandidate source : tablesCandidates) { + LOG.debug("$$Table Common Name = " + source.getCommonTableName() + ", Base Table = " + + source.getBaseTableName() + ", Suffix = " 
+ source.getAdditionalSuffix() + ", Full TB = " + source.getFullTableName() + ", Table Name = " + source.getTableName()); + } + LOG.debug("---------END------," + debugtext); + } + } + } + // ORACLE_TARGET - END + + @Override + public Set<CopyContext.DataCopyItem> get(final MigrationContext context) throws Exception { + final Set<TableCandidate> sourceTablesCandidates = getSourceTableCandidates(context); + final Set<TableCandidate> targetTablesCandidates = getTargetTableCandidates(context); + final Sets.SetView<TableCandidate> sourceTables = Sets.intersection(sourceTablesCandidates, + targetTablesCandidates); + + // debug logging of the candidate sets only + logTables(sourceTablesCandidates, "source table candidates"); + logTables(targetTablesCandidates, "target table candidates"); + logTables(sourceTables, "intersection tables"); + + final Set<TableCandidate> sourceTablesToMigrate = sourceTables.stream() + .filter(t -> dataCopyTableFilter.filter(context).test(t.getCommonTableName())) + .collect(Collectors.toSet()); + + return createCopyItems(context, sourceTablesToMigrate, targetTablesCandidates.stream() + .collect(Collectors.toMap(t -> t.getCommonTableName().toLowerCase(), t -> t))); + } + + @Override + public Set<TableCandidate> getSourceTableCandidates(final MigrationContext context) throws Exception { + return getTableCandidates(context, context.getDataSourceRepository()); + } + + @Override + public Set<TableCandidate> getTargetTableCandidates(final MigrationContext context) throws Exception { + return getAllTableCandidates(context); + } + + private Set<TableCandidate> getAllTableCandidates(final MigrationContext context) throws Exception { + final DataRepository targetRepository = context.getDataTargetRepository(); + final String prefix = targetRepository.getDataSourceConfiguration().getTablePrefix(); + + return targetRepository.getAllTableNames().stream() + .filter(n -> prefix == null || StringUtils.startsWithIgnoreCase(n, prefix)) + .map(n -> StringUtils.removeStartIgnoreCase(n, prefix)) + .filter(n -> !isNonMatchingTypesystemTable(targetRepository, n)) + .map(n -> createTableCandidate(targetRepository, n)) + .collect(Collectors.toCollection(() -> new TreeSet<>(tableCandidateComparator))); + } + + private boolean isNonMatchingTypesystemTable(final DataRepository repository, final String tableName) { + boolean isTypesystemTable = false; + // TODO: clarify whether tables ending in SN_SUFFIX need the same type-system suffix handling 
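+ // A table is treated as type-system related when its base name matches one of TYPE_SYSTEM_RELATED_TYPES; + // it is then filtered out unless its suffix matches the configured type-system suffix, e.g. with + // db.typesystemsuffix = 1, composedtypes1 is kept while composedtypes2 is skipped.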
+ if (!StringUtils.endsWithIgnoreCase(tableName, SN_SUFFIX)) { + isTypesystemTable = Arrays.stream(TYPE_SYSTEM_RELATED_TYPES) + .anyMatch(t -> StringUtils.startsWithIgnoreCase(tableName, t)); + } + if (isTypesystemTable) { + final String additionalSuffix = getAdditionalSuffix(tableName, repository.getDatabaseProvider()); + final String tableNameWithoutAdditionalSuffix = getTableNameWithoutAdditionalSuffix(tableName, additionalSuffix); + final String typeSystemSuffix = repository.getDataSourceConfiguration().getTypeSystemSuffix(); + LOG.debug("$$TS table name = " + tableName + ", additionalSuffix = " + additionalSuffix + + ", tableNameWithoutAdditionalSuffix = " + tableNameWithoutAdditionalSuffix + ", typeSystemSuffix = " + + typeSystemSuffix); + return !StringUtils.endsWithIgnoreCase(tableNameWithoutAdditionalSuffix, typeSystemSuffix); + } + return false; + } + + private Set<TableCandidate> getTableCandidates(final MigrationContext context, final DataRepository repository) + throws Exception { + final Set<String> allTableNames = repository.getAllTableNames(); + + LOG.debug("$$ALL TABLES...getTableCandidates " + allTableNames); + final Set<TableCandidate> tableCandidates = new TreeSet<>(tableCandidateComparator); + + //add meta tables + tableCandidates.add(createTableCandidate(repository, CommercedbsyncConstants.DEPLOYMENTS_TABLE)); + tableCandidates.add(createTableCandidate(repository, "aclentries")); + tableCandidates.add(createTableCandidate(repository, "configitems")); + tableCandidates.add(createTableCandidate(repository, "numberseries")); + tableCandidates.add(createTableCandidate(repository, "metainformations")); + + //add tables listed in "ydeployments" + final Set<TypeSystemTable> allTypeSystemTables = repository.getAllTypeSystemTables(); + allTypeSystemTables.forEach(t -> { + tableCandidates.add(createTableCandidate(repository, t.getTableName())); + + final String propsTableName = t.getPropsTableName(); + if (StringUtils.isNotEmpty(propsTableName)) { + tableCandidates.add(createTableCandidate(repository, propsTableName)); + } + + final TableCandidate lpTable = createTableCandidate(repository, t.getTableName() + LP_SUFFIX); + if (allTableNames.stream().anyMatch(lpTable.getFullTableName()::equalsIgnoreCase)) { 
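+ // table-name casing differs across databases (e.g. Oracle reports upper case), hence the case-insensitive match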
" + lpTable.getFullTableName()); + tableCandidates.add(lpTable); + } + + /* + * if (allTableNames.contains(lpTable.getFullTableName())) { + * tableCandidates.add(lpTable); } + */ + // ORACLE_TARGET -END + + if (shouldMigrateAuditTable(context, t.getAuditTableName())) { + final TableCandidate auditTable = createTableCandidate(repository, t.getAuditTableName()); + + // ORACLE_TARGET - START..needs to be tested.Case insensitive + // match + if (allTableNames.stream().anyMatch(auditTable.getFullTableName()::equalsIgnoreCase)) { + tableCandidates.add(lpTable); + } + + /* + * if (allTableNames.contains(auditTable.getFullTableName())) { + * tableCandidates.add(auditTable); } + */ + // ORACLE_TARGET - END + } + }); + + // custom tables + if (CollectionUtils.isNotEmpty(context.getCustomTables())) { + tableCandidates.addAll(context.getCustomTables().stream().map(t -> createTableCandidate(repository, t)) + .collect(Collectors.toSet())); + } + + return tableCandidates; + } + + private TableCandidate createTableCandidate(final DataRepository repository, final String tableName) { + final TableCandidate candidate = new TableCandidate(); + + final String additionalSuffix = getAdditionalSuffix(tableName, repository.getDatabaseProvider()); + final String tableNameWithoutAdditionalSuffix = getTableNameWithoutAdditionalSuffix(tableName, + additionalSuffix); + final String baseTableName = getTableNameWithoutTypeSystemSuffix(tableNameWithoutAdditionalSuffix, + repository.getDataSourceConfiguration().getTypeSystemSuffix()); + final boolean isTypeSystemRelatedTable = isTypeSystemRelatedTable(baseTableName); + candidate.setCommonTableName(baseTableName + additionalSuffix); + candidate.setTableName(tableName); + candidate.setFullTableName(repository.getDataSourceConfiguration().getTablePrefix() + tableName); + candidate.setAdditionalSuffix(additionalSuffix); + candidate.setBaseTableName(baseTableName); + candidate.setTypeSystemRelatedTable(isTypeSystemRelatedTable); + return candidate; + } + + private boolean isTypeSystemRelatedTable(final String tableName) { + return Arrays.stream(TYPE_SYSTEM_RELATED_TYPES).anyMatch(tableName::equalsIgnoreCase); + } + + private String getAdditionalSuffix(final String tableName, final DataBaseProvider dataBaseProvider) { + // ORACLE_TARGET - START + if (dataBaseProvider.isOracleUsed() && (StringUtils.endsWith(tableName, LP_SUFFIX_UPPER))) { + return LP_SUFFIX_UPPER; + }// ORACLE_TARGET - END + else if(dataBaseProvider.isHanaUsed() && (StringUtils.endsWith(tableName, LP_SUFFIX_UPPER))){ + return LP_SUFFIX_UPPER; + }else if (StringUtils.endsWithIgnoreCase(tableName, LP_SUFFIX)) { + return LP_SUFFIX; + } else { + return StringUtils.EMPTY; + } + } + + private String getTableNameWithoutTypeSystemSuffix(final String tableName, final String suffix) { + return StringUtils.removeEnd(tableName, suffix); + } + + private String getTableNameWithoutAdditionalSuffix(final String tableName, final String suffix) { + return StringUtils.removeEnd(tableName, suffix); + } + + private Set createCopyItems(final MigrationContext context, + final Set sourceTablesToMigrate, final Map targetTablesToMigrate) { + final Set copyItems = new HashSet<>(); + for (final TableCandidate sourceTableToMigrate : sourceTablesToMigrate) { + final String targetTableKey = sourceTableToMigrate.getCommonTableName().toLowerCase(); + + LOG.debug("Eligible Tables to Migrate =" + targetTableKey); + if (targetTablesToMigrate.containsKey(targetTableKey)) { + final TableCandidate targetTableToMigrate = 
targetTablesToMigrate.get(targetTableKey); + copyItems.add(createCopyItem(context, sourceTableToMigrate, targetTableToMigrate)); + } else { + throw new IllegalStateException("Target table must exists"); + } + } + return copyItems; + } + + private CopyContext.DataCopyItem createCopyItem(final MigrationContext context, final TableCandidate sourceTable, + final TableCandidate targetTable) { + final String sourceTableName = sourceTable.getFullTableName(); + final String targetTableName = targetTable.getFullTableName(); + final CopyContext.DataCopyItem dataCopyItem = new CopyContext.DataCopyItem(sourceTableName, targetTableName); + addColumnMappingsIfNecessary(context, sourceTable, dataCopyItem); + return dataCopyItem; + } + + private void addColumnMappingsIfNecessary(final MigrationContext context, final TableCandidate sourceTable, + final CopyContext.DataCopyItem dataCopyItem) { + if (sourceTable.getCommonTableName().equalsIgnoreCase(CommercedbsyncConstants.DEPLOYMENTS_TABLE)) { + final String sourceTypeSystemName = context.getDataSourceRepository().getDataSourceConfiguration() + .getTypeSystemName(); + final String targetTypeSystemName = context.getDataTargetRepository().getDataSourceConfiguration() + .getTypeSystemName(); + // Add mapping to override the TypeSystemName value in target table + if (!sourceTypeSystemName.equalsIgnoreCase(targetTypeSystemName)) { + dataCopyItem.getColumnMap().put("TypeSystemName", targetTypeSystemName); + } + } + } + + public void setDataCopyTableFilter(final DataCopyTableFilter dataCopyTableFilter) { + this.dataCopyTableFilter = dataCopyTableFilter; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/DataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/DataRepository.java new file mode 100644 index 0000000..1a73b08 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/DataRepository.java @@ -0,0 +1,94 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.repository; + +import com.sap.cx.boosters.commercedbsync.dataset.DataSet; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; +import de.hybris.bootstrap.ddl.DataBaseProvider; +import org.apache.ddlutils.Platform; +import org.apache.ddlutils.model.Database; +import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition; +import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition; +import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition; +import com.sap.cx.boosters.commercedbsync.TypeSystemTable; +import org.springframework.core.io.Resource; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.SQLException; +import java.time.Instant; +import java.util.Set; + +/** + * Provides access to a source or target database: schema metadata, batched reads, and the DDL/DML helpers used during migration. + */ +public interface DataRepository { + Database asDatabase(); + + Database asDatabase(boolean reload); + + Set<String> getAllTableNames() throws Exception; + + Set<TypeSystemTable> getAllTypeSystemTables() throws Exception; + + boolean isAuditTable(String table) throws Exception; + + Set<String> getAllColumnNames(String table) throws Exception; + + DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition) throws Exception; + + DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition, Instant time) throws Exception; + + DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition) throws Exception; + + DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition, Instant time) throws Exception; + + DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition) throws Exception; + + long getRowCount(String table) throws Exception; + + long getRowCountModifiedAfter(String table, Instant time, boolean isDeletionEnabled, boolean lpTableMigrationEnabled) throws SQLException; + + long getRowCountModifiedAfter(String table, Instant time) throws SQLException; + + DataSet getAll(String table) throws Exception; + + DataSet getAllModifiedAfter(String table, Instant time) throws Exception; + + DataSourceConfiguration getDataSourceConfiguration(); + + int executeUpdateAndCommit(String updateStatement) throws Exception; + + void runSqlScript(final Resource resource); + + float getDatabaseUtilization() throws SQLException; + + int truncateTable(String table) throws Exception; + + void disableIndexesOfTable(String table) throws Exception; + + void enableIndexesOfTable(String table) throws SQLException; + + void dropIndexesOfTable(String table) throws SQLException; + + Platform asPlatform(); + + Platform asPlatform(boolean reload); + + DataBaseProvider getDatabaseProvider(); + + Connection getConnection() throws Exception; + + DataSource getDataSource(); + + DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition, Instant time) throws Exception; + + DataSet getUniqueColumns(String table) throws Exception; + + boolean validateConnection() throws Exception; +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AbstractDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AbstractDataRepository.java new file mode 100644 index 0000000..dd92a92 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AbstractDataRepository.java @@ -0,0 +1,580 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.repository.impl; + +import com.google.common.base.Joiner; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.dataset.impl.DefaultDataColumn; +import com.sap.cx.boosters.commercedbsync.dataset.impl.DefaultDataSet; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService; +import de.hybris.bootstrap.ddl.DatabaseSettings; +import de.hybris.bootstrap.ddl.HybrisPlatformFactory; +import de.hybris.bootstrap.ddl.tools.persistenceinfo.PersistenceInformation; +import org.apache.commons.lang3.StringUtils; +import org.apache.ddlutils.Platform; +import org.apache.ddlutils.model.Database; +import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition; +import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition; +import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition; +import com.sap.cx.boosters.commercedbsync.TypeSystemTable; +import com.sap.cx.boosters.commercedbsync.dataset.DataColumn; +import com.sap.cx.boosters.commercedbsync.dataset.DataSet; +import com.sap.cx.boosters.commercedbsync.datasource.MigrationDataSourceFactory; +import com.sap.cx.boosters.commercedbsync.datasource.impl.DefaultMigrationDataSourceFactory; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.core.io.Resource; +import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.time.Instant; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.TreeSet; +import java.util.concurrent.ConcurrentHashMap; + +/** + * Base JDBC implementation of a {@link DataRepository}; concrete subclasses contribute the database-specific queries. + */ +public abstract class AbstractDataRepository implements DataRepository { + + private static final Logger LOG = LoggerFactory.getLogger(AbstractDataRepository.class); + + private final Map<String, DataSource> dataSourceHolder = new ConcurrentHashMap<>(); + + private final DataSourceConfiguration dataSourceConfiguration; + private final MigrationDataSourceFactory migrationDataSourceFactory; + private final DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService; + private Platform platform; + private Database database; + + public AbstractDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) { + this(dataSourceConfiguration, databaseMigrationDataTypeMapperService, new DefaultMigrationDataSourceFactory()); + } + + public AbstractDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService, MigrationDataSourceFactory migrationDataSourceFactory) { + this.dataSourceConfiguration = dataSourceConfiguration; + this.migrationDataSourceFactory = migrationDataSourceFactory; + this.databaseMigrationDataTypeMapperService = databaseMigrationDataTypeMapperService; + } + + @Override + public DataSourceConfiguration getDataSourceConfiguration() { + return dataSourceConfiguration; + } + + @Override 
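+ // created lazily on first access and cached in dataSourceHolder, so all callers share the same DataSource instance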
+ public DataSource getDataSource() { + return dataSourceHolder.computeIfAbsent("DATASOURCE", s -> migrationDataSourceFactory.create(dataSourceConfiguration)); + } + + public Connection getConnection() throws SQLException { + Connection connection = getDataSource().getConnection(); + connection.setAutoCommit(false); + return connection; + } + + @Override + public int executeUpdateAndCommit(String updateStatement) throws SQLException { + try (Connection conn = getConnectionForUpdateAndCommit(); + Statement statement = conn.createStatement() + ) { + return statement.executeUpdate(updateStatement); + } + } + + public Connection getConnectionForUpdateAndCommit() throws SQLException { + Connection connection = getDataSource().getConnection(); + connection.setAutoCommit(true); + return connection; + } + + @Override + public void runSqlScript(Resource resource) { + final ResourceDatabasePopulator databasePopulator = new ResourceDatabasePopulator(resource); + databasePopulator.setIgnoreFailedDrops(true); + databasePopulator.execute(getDataSource()); + } + + @Override + public float getDatabaseUtilization() throws SQLException { + throw new UnsupportedOperationException("Must be added in the specific repository implementation"); + } + + @Override + public int truncateTable(String table) throws SQLException { + return executeUpdateAndCommit(String.format("truncate table %s", table)); + } + + @Override + public long getRowCount(String table) throws SQLException { + List conditionsList = new ArrayList<>(1); + processDefaultConditions(table, conditionsList); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(String.format("select count(*) from %s where %s", table, expandConditions(conditions))) + ) { + long value = 0; + if (resultSet.next()) { + value = resultSet.getLong(1); + } + return value; + } + } + + @Override + public long getRowCountModifiedAfter(String table, Instant time, boolean isDeletionEnabled, boolean lpTableMigrationEnabled) + throws SQLException { + return getRowCountModifiedAfter(table,time); + } + + @Override + public long getRowCountModifiedAfter(String table, Instant time) throws SQLException { + List conditionsList = new ArrayList<>(1); + processDefaultConditions(table, conditionsList); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection()) { + try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? 
AND %s", table, expandConditions(conditions)))) { + stmt.setTimestamp(1, Timestamp.from(time)); + ResultSet resultSet = stmt.executeQuery(); + long value = 0; + if (resultSet.next()) { + value = resultSet.getLong(1); + } + return value; + } + } + } + + @Override + public DataSet getAll(String table) throws Exception { + List conditionsList = new ArrayList<>(1); + processDefaultConditions(table, conditionsList); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(String.format("select * from %s where %s", table, expandConditions(conditions))) + ) { + return convertToDataSet(resultSet); + } + } + + @Override + public DataSet getAllModifiedAfter(String table, Instant time) throws Exception { + List conditionsList = new ArrayList<>(1); + processDefaultConditions(table, conditionsList); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection()) { + try (PreparedStatement stmt = connection.prepareStatement(String.format("select * from %s where modifiedts > ? and %s", table, expandConditions(conditions)))) { + stmt.setTimestamp(1, Timestamp.from(time)); + ResultSet resultSet = stmt.executeQuery(); + return convertToDataSet(resultSet); + } + } + } + + protected DefaultDataSet convertToDataSet(ResultSet resultSet) throws Exception { + return convertToDataSet(resultSet, Collections.emptySet()); + } + + protected DefaultDataSet convertToDataSet(ResultSet resultSet, Set ignoreColumns) throws Exception { + int realColumnCount = resultSet.getMetaData().getColumnCount(); + List columnOrder = new ArrayList<>(); + int columnCount = 0; + for (int i = 1; i <= realColumnCount; i++) { + String columnName = resultSet.getMetaData().getColumnName(i); + int columnType = resultSet.getMetaData().getColumnType(i); + int precision = resultSet.getMetaData().getPrecision(i); + int scale = resultSet.getMetaData().getScale(i); + if (ignoreColumns.stream().anyMatch(columnName::equalsIgnoreCase)) { + continue; + } + columnCount += 1; + columnOrder.add(new DefaultDataColumn(columnName, columnType, precision, scale)); + } + List> results = new ArrayList<>(); + while (resultSet.next()) { + List row = new ArrayList<>(); + for (DataColumn dataColumn : columnOrder) { + int idx = resultSet.findColumn(dataColumn.getColumnName()); + Object object = resultSet.getObject(idx); + //TODO: improve CLOB/BLOB handling + Object mappedValue = databaseMigrationDataTypeMapperService.dataTypeMapper(object, resultSet.getMetaData().getColumnType(idx)); + row.add(mappedValue); + } + results.add(row); + } + return new DefaultDataSet(columnCount, columnOrder, results); + } + + @Override + public void disableIndexesOfTable(String table) throws SQLException { + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(getDisableIndexesScript(table)) + ) { + while (resultSet.next()) { + String q = resultSet.getString(1); + LOG.debug("Running query: {}", q); + executeUpdateAndCommit(q); + } + } + } + + @Override + public void enableIndexesOfTable(String table) throws SQLException { + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = 
stmt.executeQuery(getEnableIndexesScript(table)) + ) { + while (resultSet.next()) { + String q = resultSet.getString(1); + LOG.debug("Running query: {}", q); + executeUpdateAndCommit(q); + } + } + } + + @Override + public void dropIndexesOfTable(String table) throws SQLException { + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(getDropIndexesScript(table)) + ) { + while (resultSet.next()) { + String q = resultSet.getString(1); + LOG.debug("Running query: {}", q); + executeUpdateAndCommit(q); + } + } + } + + protected String getDisableIndexesScript(String table) { + throw new UnsupportedOperationException("not implemented"); + + + } + + protected String getEnableIndexesScript(String table) { + throw new UnsupportedOperationException("not implemented"); + } + + protected String getDropIndexesScript(String table) { + throw new UnsupportedOperationException("not implemented"); + } + + @Override + public Platform asPlatform() { + return asPlatform(false); + } + + @Override + public Platform asPlatform(boolean reload) { + //TODO all properties to be set and check + if (this.platform == null || reload) { + final DatabaseSettings databaseSettings = new DatabaseSettings(getDatabaseProvider(), getDataSourceConfiguration().getConnectionString(), getDataSourceConfiguration().getDriver(), getDataSourceConfiguration().getUserName(), getDataSourceConfiguration().getPassword(), getDataSourceConfiguration().getTablePrefix(), ";"); + this.platform = createPlatform(databaseSettings, getDataSource()); + addCustomPlatformTypeMapping(this.platform); + } + return this.platform; + } + + protected Platform createPlatform(DatabaseSettings databaseSettings, DataSource dataSource) { + return HybrisPlatformFactory.createInstance(databaseSettings, dataSource); + } + + + protected void addCustomPlatformTypeMapping(Platform platform) { + } + + @Override + public Database asDatabase() { + return asDatabase(false); + } + + @Override + public Database asDatabase(boolean reload) { + if (this.database == null || reload) { + this.database = getDatabase(reload); + } + return this.database; + } + + protected Database getDatabase(boolean reload) { + String schema = getDataSourceConfiguration().getSchema(); + return asPlatform(reload).readModelFromDatabase(getDataSourceConfiguration().getProfile(), null, + schema, null); + } + + @Override + public Set getAllTableNames() throws SQLException { + Set allTableNames = new TreeSet<>(String.CASE_INSENSITIVE_ORDER); + String allTableNamesQuery = createAllTableNamesQuery(); + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(allTableNamesQuery) + ) { + while (resultSet.next()) { + String tableName = resultSet.getString(1); + if (!StringUtils.startsWithIgnoreCase(tableName, CommercedbsyncConstants.MIGRATION_TABLESPREFIX)) { + allTableNames.add(resultSet.getString(1)); + } + } + } + return allTableNames; + } + + @Override + public Set getAllTypeSystemTables() throws SQLException { + if (StringUtils.isEmpty(getDataSourceConfiguration().getTypeSystemName())) { + throw new RuntimeException("No type system name specified. 
Check the properties"); + } + String tablePrefix = getDataSourceConfiguration().getTablePrefix(); + String yDeploymentsTable = StringUtils.defaultIfBlank(tablePrefix, "") + CommercedbsyncConstants.DEPLOYMENTS_TABLE; + Set allTableNames = getAllTableNames(); + if (!allTableNames.contains(yDeploymentsTable)) { + return Collections.emptySet(); + } + String allTypeSystemTablesQuery = String.format("SELECT * FROM %s WHERE Typecode IS NOT NULL AND TableName IS NOT NULL AND TypeSystemName = '%s'", yDeploymentsTable, getDataSourceConfiguration().getTypeSystemName()); + Set allTypeSystemTables = new HashSet<>(); + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(allTypeSystemTablesQuery) + ) { + while (resultSet.next()) { + TypeSystemTable typeSystemTable = new TypeSystemTable(); + String name = resultSet.getString("Name"); + String tableName = resultSet.getString("TableName"); + typeSystemTable.setTypeCode(resultSet.getString("Typecode")); + typeSystemTable.setTableName(tableName); + typeSystemTable.setName(name); + typeSystemTable.setTypeSystemName(resultSet.getString("TypeSystemName")); + typeSystemTable.setAuditTableName(resultSet.getString("AuditTableName")); + typeSystemTable.setPropsTableName(resultSet.getString("PropsTableName")); + typeSystemTable.setTypeSystemSuffix(detectTypeSystemSuffix(tableName, name)); + typeSystemTable.setTypeSystemRelatedTable(PersistenceInformation.isTypeSystemRelatedDeployment(name)); + allTypeSystemTables.add(typeSystemTable); + } + } + return allTypeSystemTables; + } + + private String detectTypeSystemSuffix(String tableName, String name) { + if (PersistenceInformation.isTypeSystemRelatedDeployment(name)) { + return getDataSourceConfiguration().getTypeSystemSuffix(); + } + return StringUtils.EMPTY; + } + + @Override + public boolean isAuditTable(String table) throws Exception { + String tablePrefix = getDataSourceConfiguration().getTablePrefix(); + String query = String.format("SELECT count(*) from %s%s WHERE AuditTableName = ? 
OR AuditTableName = ?", StringUtils.defaultIfBlank(tablePrefix, ""), CommercedbsyncConstants.DEPLOYMENTS_TABLE); + try (Connection connection = getConnection(); + PreparedStatement stmt = connection.prepareStatement(query); + ) { + stmt.setObject(1, StringUtils.removeStartIgnoreCase(table, tablePrefix)); + stmt.setObject(2, table); + try (ResultSet rs = stmt.executeQuery()) { + boolean isAudit = false; + if (rs.next()) { + isAudit = rs.getInt(1) > 0; + } + return isAudit; + } + } + } + + protected abstract String createAllTableNamesQuery(); + + @Override + public Set getAllColumnNames(String table) throws SQLException { + String allColumnNamesQuery = createAllColumnNamesQuery(table); + Set allColumnNames = new TreeSet<>(String.CASE_INSENSITIVE_ORDER); + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(allColumnNamesQuery) + ) { + while (resultSet.next()) { + allColumnNames.add(resultSet.getString(1)); + } + } + return allColumnNames; + } + + protected abstract String createAllColumnNamesQuery(String table); + + @Override + public DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition) throws Exception { + return getBatchWithoutIdentifier(queryDefinition, null); + } + + @Override + public DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition, Instant time) throws Exception { + //get batches with modifiedts >= configured time for incremental migration + List conditionsList = new ArrayList<>(1); + processDefaultConditions(queryDefinition.getTable(), conditionsList); + if (time != null) { + conditionsList.add("modifiedts > ?"); + } + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection(); + PreparedStatement stmt = connection.prepareStatement(buildOffsetBatchQuery(queryDefinition, conditions))) { + stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue()); + if (time != null) { + stmt.setTimestamp(1, Timestamp.from(time)); + } + ResultSet resultSet = stmt.executeQuery(); + return convertToBatchDataSet(resultSet); + } + } + + @Override + public DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition) throws Exception { + return getBatchOrderedByColumn(queryDefinition, null); + } + + @Override + public DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition, Instant time) throws Exception { + //get batches with modifiedts >= configured time for incremental migration + List conditionsList = new ArrayList<>(2); + processDefaultConditions(queryDefinition.getTable(), conditionsList); + if (time != null) { + conditionsList.add("modifiedts > ?"); + } + if (queryDefinition.getLastColumnValue() != null) { + conditionsList.add(String.format("%s >= %s", queryDefinition.getColumn(), queryDefinition.getLastColumnValue())); + } + if (queryDefinition.getNextColumnValue() != null) { + conditionsList.add(String.format("%s < %s", queryDefinition.getColumn(), queryDefinition.getNextColumnValue())); + } + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection(); + PreparedStatement stmt = connection.prepareStatement(buildValueBatchQuery(queryDefinition, conditions))) { + stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue()); + if (time != null) { + stmt.setTimestamp(1, 
Timestamp.from(time)); + } + ResultSet resultSet = stmt.executeQuery(); + return convertToBatchDataSet(resultSet); + } + } + + + @Override + public DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition) throws Exception { + return getBatchMarkersOrderedByColumn(queryDefinition, null); + } + + @Override + public DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition, Instant time) throws Exception { + //get batches with modifiedts >= configured time for incremental migration + List conditionsList = new ArrayList<>(2); + processDefaultConditions(queryDefinition.getTable(), conditionsList); + if (time != null) { + conditionsList.add("modifiedts > ?"); + } + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection(); + PreparedStatement stmt = connection.prepareStatement(buildBatchMarkersQuery(queryDefinition, conditions))) { + stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue()); + if (time != null) { + stmt.setTimestamp(1, Timestamp.from(time)); + } + ResultSet resultSet = stmt.executeQuery(); + return convertToBatchDataSet(resultSet); + } + } + + @Override + public DataSet getUniqueColumns(String table) throws Exception { + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement()) { + ResultSet resultSet = stmt.executeQuery(createUniqueColumnsQuery(table)); + return convertToDataSet(resultSet); + } + } + + protected abstract String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions); + + protected abstract String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions); + + protected abstract String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions); + + protected abstract String createUniqueColumnsQuery(String tableName); + + protected void processDefaultConditions(String table, List conditionsList) { + String tsCondition = getTsCondition(table); + if (StringUtils.isNotEmpty(tsCondition)) { + conditionsList.add(tsCondition); + } + } + + + private String getTsCondition(String table) { + Objects.requireNonNull(table); + if (table.toLowerCase().endsWith(CommercedbsyncConstants.DEPLOYMENTS_TABLE)) { + return String.format("TypeSystemName = '%s'", getDataSourceConfiguration().getTypeSystemName()); + } + return null; + } + + protected String expandConditions(String[] conditions) { + if (conditions == null || conditions.length == 0) { + return "1=1"; + } else { + return Joiner.on(" and ").join(conditions); + } + } + + protected DataSet convertToBatchDataSet(ResultSet resultSet) throws Exception { + return convertToDataSet(resultSet); + } + + @Override + public boolean validateConnection() throws Exception { + try (Connection connection = getConnection()) { + return connection.isValid(120); + } + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureDataRepository.java new file mode 100644 index 0000000..47fac4e --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureDataRepository.java @@ -0,0 +1,176 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.repository.impl; + +import com.google.common.base.Joiner; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService; +import de.hybris.bootstrap.ddl.DataBaseProvider; +import de.hybris.bootstrap.ddl.DatabaseSettings; +import de.hybris.bootstrap.ddl.HybrisPlatform; +import org.apache.commons.lang.StringUtils; +import org.apache.ddlutils.Platform; +import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition; +import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition; +import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition; +import com.sap.cx.boosters.commercedbsync.repository.platform.MigrationHybrisMSSqlPlatform; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Types; + +public class AzureDataRepository extends AbstractDataRepository { + + private static final Logger LOG = LoggerFactory.getLogger(AzureDataRepository.class); + + private static final String LP_SUFFIX = "lp"; + + public AzureDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) { + super(dataSourceConfiguration, databaseMigrationDataTypeMapperService); + } + + @Override + protected void addCustomPlatformTypeMapping(Platform platform) { + platform.getPlatformInfo().addNativeTypeMapping(Types.NCLOB, "NVARCHAR(MAX)"); + platform.getPlatformInfo().addNativeTypeMapping(Types.CLOB, "NVARCHAR(MAX)"); + platform.getPlatformInfo().addNativeTypeMapping(Types.LONGVARCHAR, "NVARCHAR(MAX)"); + platform.getPlatformInfo().addNativeTypeMapping(Types.VARBINARY, "VARBINARY"); + platform.getPlatformInfo().addNativeTypeMapping(Types.REAL, "float"); + platform.getPlatformInfo().addNativeTypeMapping(Types.LONGVARBINARY, "VARBINARY(MAX)"); + platform.getPlatformInfo().addNativeTypeMapping(Types.NCHAR, "NVARCHAR"); + platform.getPlatformInfo().setHasSize(Types.NCHAR, true); + platform.getPlatformInfo().setHasSize(Types.VARBINARY, true); + platform.getPlatformInfo().setHasSize(Types.NVARCHAR, true); + platform.getPlatformInfo().setHasPrecisionAndScale(Types.REAL, false); + } + + @Override + protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) { + String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns()); + return String.format("SELECT * FROM %s WHERE %s ORDER BY %s OFFSET %s ROWS FETCH NEXT %s ROWS ONLY", queryDefinition.getTable(), expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize()); + } + + @Override + protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) { + return String.format("select top %s * from %s where %s order by %s", queryDefinition.getBatchSize(), queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getColumn()); + } + + @Override + protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... 
conditions) { + String column = queryDefinition.getColumn(); + String tableName = queryDefinition.getTable(); + if (queryDefinition.isLpTableEnabled()){ + tableName = getLpTableName(tableName); + } + return String.format("SELECT t.%s, t.rownum\n" + + "FROM\n" + + "(\n" + + " SELECT %s, (ROW_NUMBER() OVER (ORDER BY %s))-1 AS rownum\n" + + " FROM %s\n WHERE %s" + + ") AS t\n" + + "WHERE t.rownum %% %s = 0\n" + + "ORDER BY t.%s", column, column, column, tableName, expandConditions(conditions), queryDefinition.getBatchSize(), column); + } + + @Override + protected String createAllTableNamesQuery() { + return String.format( + "SELECT DISTINCT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = '%s'", + getDataSourceConfiguration().getSchema()); + } + + @Override + protected String createAllColumnNamesQuery(String tableName) { + return String.format( + "SELECT DISTINCT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s'", + getDataSourceConfiguration().getSchema(), tableName); + } + + @Override + protected String getDisableIndexesScript(String table) { + return String.format("SELECT 'ALTER INDEX ' + QUOTENAME(I.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(T.schema_id))+'.'+ QUOTENAME(T.name) + ' DISABLE' FROM sys.indexes I INNER JOIN sys.tables T ON I.object_id = T.object_id WHERE I.type_desc = 'NONCLUSTERED' AND I.name IS NOT NULL AND I.is_disabled = 0 AND T.name = '%s'", table); + } + + @Override + protected String getEnableIndexesScript(String table) { + return String.format("SELECT 'ALTER INDEX ' + QUOTENAME(I.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(T.schema_id))+'.'+ QUOTENAME(T.name) + ' REBUILD' FROM sys.indexes I INNER JOIN sys.tables T ON I.object_id = T.object_id WHERE I.type_desc = 'NONCLUSTERED' AND I.name IS NOT NULL AND I.is_disabled = 1 AND T.name = '%s'", table); + } + + @Override + protected String getDropIndexesScript(String table) { + return String.format("SELECT 'DROP INDEX ' + QUOTENAME(I.name) + ' ON ' + QUOTENAME(SCHEMA_NAME(T.schema_id))+'.'+ QUOTENAME(T.name) FROM sys.indexes I INNER JOIN sys.tables T ON I.object_id = T.object_id WHERE I.type_desc = 'NONCLUSTERED' AND I.name IS NOT NULL AND T.name = '%s'", table); + } + + @Override + public float getDatabaseUtilization() throws SQLException { + String query = "SELECT TOP 1 end_time, (SELECT Max(v) FROM (VALUES (avg_cpu_percent),(avg_data_io_percent),(avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent] FROM sys.dm_db_resource_stats ORDER by end_time DESC;"; + try (Connection connection = getConnection(); + Statement stmt = connection.createStatement(); + ResultSet resultSet = stmt.executeQuery(query); + ) { + if (resultSet.next()) { + return resultSet.getFloat("avg_DTU_percent"); + } else { + //LOG.debug("There are no data with regard to Azure DTU"); + return -1; + } + } catch (Exception e) { + LOG.trace("could not load database utilization stats"); + return -1; + } + } + + @Override + protected Platform createPlatform(DatabaseSettings databaseSettings, DataSource dataSource) { + HybrisPlatform instance = MigrationHybrisMSSqlPlatform.build(databaseSettings); + instance.setDataSource(dataSource); + return instance; + } + + @Override + protected String createUniqueColumnsQuery(String tableName) { + return String.format("SELECT col.name FROM (\n" + + "SELECT TOP (1)\n" + + " SchemaName = t.schema_id,\n" + + " ObjectId = ind.object_id,\n" + + " IndexId = ind.index_id,\n" + + " TableName = t.name,\n" + + " IndexName = ind.name,\n" + + " ColCount = count(*)\n" + + "FROM \n" + + 
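+ // (the TOP(1) ... ORDER BY ColCount subquery picks the unique index with the fewest columns; the outer joins then list that index's column names)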
" sys.indexes ind \n" + + "INNER JOIN \n" + + " sys.tables t ON ind.object_id = t.object_id \n" + + "WHERE \n" + + " t.name = '%s'\n" + + " AND\n" + + " SCHEMA_NAME(t.schema_id) = '%s'\n" + + " AND\n" + + " ind.is_unique = 1\n" + + "GROUP BY t.schema_id,ind.object_id,ind.index_id,t.name,ind.name\n" + + "ORDER BY ColCount ASC\n" + + ") t1\n" + + "INNER JOIN \n" + + " sys.index_columns ic ON t1.ObjectId = ic.object_id and t1.IndexId = ic.index_id \n" + + "INNER JOIN \n" + + " sys.columns col ON ic.object_id = col.object_id and ic.column_id = col.column_id ", tableName, getDataSourceConfiguration().getSchema()); + } + + @Override + public DataBaseProvider getDatabaseProvider() { + return DataBaseProvider.MSSQL; + } + + private String getLpTableName(String tableName){ + return StringUtils.removeEndIgnoreCase(tableName,LP_SUFFIX); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureIncrementalDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureIncrementalDataRepository.java new file mode 100644 index 0000000..0e683d2 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/AzureIncrementalDataRepository.java @@ -0,0 +1,414 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.repository.impl; + +import com.google.common.base.Joiner; +import com.sap.cx.boosters.commercedbsync.dataset.DataSet; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService; +import de.hybris.platform.util.Config; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Timestamp; +import java.time.Instant; +import java.util.ArrayList; +import java.util.List; +import java.util.stream.Collectors; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition; +import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition; +import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class AzureIncrementalDataRepository extends AzureDataRepository{ + + private static final Logger LOG = LoggerFactory.getLogger(AzureIncrementalDataRepository.class); + + private static final String LP_SUFFIX = "lp"; + + private static final String PK = "PK"; + + private static String deletionTable = Config.getParameter("db.tableprefix") == null ? "" : Config.getParameter("db.tableprefix")+ "itemdeletionmarkers"; + + public AzureIncrementalDataRepository( + DataSourceConfiguration dataSourceConfiguration, + DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) { + super(dataSourceConfiguration, databaseMigrationDataTypeMapperService); + } + @Override + protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) { + + if(queryDefinition.isDeletionEnabled()) { + return buildOffsetBatchQueryForDeletion(queryDefinition,conditions); + } else if(queryDefinition.isLpTableEnabled()) { + return buildOffsetBatchQueryForLp(queryDefinition,conditions); + } + else { + return super.buildOffsetBatchQuery(queryDefinition,conditions); + } + } + + private String buildOffsetBatchQueryForLp(OffsetQueryDefinition queryDefinition, String... 
+ @Override + protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) { + if (queryDefinition.isDeletionEnabled()) { + return buildOffsetBatchQueryForDeletion(queryDefinition, conditions); + } else if (queryDefinition.isLpTableEnabled()) { + return buildOffsetBatchQueryForLp(queryDefinition, conditions); + } else { + return super.buildOffsetBatchQuery(queryDefinition, conditions); + } + } + + private String buildOffsetBatchQueryForLp(OffsetQueryDefinition queryDefinition, String... conditions) { + String orderBy = PK; + return String.format("SELECT * FROM %s WHERE %s ORDER BY %s OFFSET %s ROWS FETCH NEXT %s ROWS ONLY", getLpTableName(queryDefinition.getTable()), expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize()); + } + + private String buildOffsetBatchQueryForDeletion(OffsetQueryDefinition queryDefinition, String... conditions) { + String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns()); + return String.format("SELECT * FROM %s WHERE %s ORDER BY %s OFFSET %s ROWS FETCH NEXT %s ROWS ONLY", deletionTable, expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize()); + } + + @Override + protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) { + if (queryDefinition.isDeletionEnabled()) { + return buildValueBatchQueryForDeletion(queryDefinition, conditions); + } else { + return super.buildValueBatchQuery(queryDefinition, conditions); + } + } + + @Override + protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) { + if (queryDefinition.isDeletionEnabled()) { + return buildBatchMarkersQueryForDeletion(queryDefinition, conditions); + } + // the LP-enabled and default cases share the same marker query + return super.buildBatchMarkersQuery(queryDefinition, conditions); + } + + @Override + public DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition, Instant time) throws Exception { + if (queryDefinition.isDeletionEnabled()) { + return getBatchOrderedByColumnForDeletion(queryDefinition, time); + } else if (queryDefinition.isLpTableEnabled()) { + return getBatchOrderedByColumnForLptable(queryDefinition, time); + } else { + return super.getBatchOrderedByColumn(queryDefinition, time); + } + } 
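+ + // LP tables are copied in two steps: buildValueBatchQueryForLptable first selects the next batch of PKs + // from the base table, then the matching LP rows are fetched via ITEMPK IN (...).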
+ private String buildValueBatchQueryForDeletion(SeekQueryDefinition queryDefinition, String... conditions) { + return String.format("select top %s * from %s where %s order by %s", queryDefinition.getBatchSize(), deletionTable, expandConditions(conditions), queryDefinition.getColumn()); + } + + private DataSet getBatchOrderedByColumnForLptable(SeekQueryDefinition queryDefinition, Instant time) throws Exception { + //get batches with modifiedts >= configured time for incremental migration + List<String> conditionsList = new ArrayList<>(2); + processDefaultConditions(queryDefinition.getTable(), conditionsList); + if (time != null) { + conditionsList.add("modifiedts > ?"); + } + if (queryDefinition.getLastColumnValue() != null) { + conditionsList.add(String.format("%s >= %s", queryDefinition.getColumn(), queryDefinition.getLastColumnValue())); + } + if (queryDefinition.getNextColumnValue() != null) { + conditionsList.add(String.format("%s < %s", queryDefinition.getColumn(), queryDefinition.getNextColumnValue())); + } + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + // step 1: collect the PKs of the next batch from the base table + List<String> pkList = new ArrayList<>(); + try (Connection connectionForPk = getConnection(); + PreparedStatement stmt = connectionForPk.prepareStatement(buildValueBatchQueryForLptable(queryDefinition, conditions))) { + stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue()); + if (time != null) { + stmt.setTimestamp(1, Timestamp.from(time)); + } + ResultSet pkResultSet = stmt.executeQuery(); + pkList = convertToPkListForLpTable(pkResultSet); + } + + // step 2: load the LP rows belonging to the collected PKs + try (Connection connection = getConnection(); + PreparedStatement stmt = connection.prepareStatement(buildValueBatchQueryForLptableWithPK(queryDefinition, pkList, conditions))) { + ResultSet resultSet = stmt.executeQuery(); + return convertToBatchDataSet(resultSet); + } + } + + private List<String> convertToPkListForLpTable(ResultSet resultSet) throws Exception { + List<String> pkList = new ArrayList<>(); + while (resultSet.next()) { + int idx = resultSet.findColumn(PK); + pkList.add(resultSet.getString(idx)); + } + return pkList; + } + + private String buildValueBatchQueryForLptableWithPK(SeekQueryDefinition queryDefinition, List<String> pkList, String... conditions) { + StringBuilder sqlBuilder = new StringBuilder(); + sqlBuilder.append(String.format("select * from %s where ", queryDefinition.getTable())); + sqlBuilder.append("\n"); + sqlBuilder.append(String.format("ITEMPK in (%s) ", Joiner.on(',').join(pkList.stream().map(column -> " " + column).collect(Collectors.toList())))); + // the time filter and seek window were already applied when the PKs were selected in step 1, + // so only the ITEMPK filter is needed here + sqlBuilder.append("order by ITEMPK"); + sqlBuilder.append(";"); + return sqlBuilder.toString(); + } + + private String buildValueBatchQueryForLptableWithPK(OffsetQueryDefinition queryDefinition, List<String> pkList, String... conditions) { + StringBuilder sqlBuilder = new StringBuilder(); + sqlBuilder.append(String.format("select * from %s where ", queryDefinition.getTable())); + sqlBuilder.append("\n"); + sqlBuilder.append(String.format("ITEMPK in (%s) ", Joiner.on(',').join(pkList.stream().map(column -> " " + column).collect(Collectors.toList())))); + sqlBuilder.append(";"); + return sqlBuilder.toString(); + } + + private String buildValueBatchQueryForLptable(SeekQueryDefinition queryDefinition, String... 
+    private String buildValueBatchQueryForLptable(SeekQueryDefinition queryDefinition, String... conditions) {
+        return String.format("select top %s PK from %s where %s order by %s", queryDefinition.getBatchSize(), getLpTableName(queryDefinition.getTable()), expandConditions(conditions), queryDefinition.getColumn());
+    }
+
+    private String buildOffsetBatchQueryForLptable(OffsetQueryDefinition queryDefinition, String... conditions) {
+        String orderBy = PK;
+        return String.format("SELECT PK FROM %s WHERE %s ORDER BY %s OFFSET %s ROWS FETCH NEXT %s ROWS ONLY", getLpTableName(queryDefinition.getTable()), expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize());
+    }
+
+    private DataSet getBatchOrderedByColumnForDeletion(SeekQueryDefinition queryDefinition, Instant time) throws Exception {
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        if (queryDefinition.getLastColumnValue() != null) {
+            conditionsList.add(String.format("%s >= %s", queryDefinition.getColumn(), queryDefinition.getLastColumnValue()));
+        }
+        if (queryDefinition.getNextColumnValue() != null) {
+            conditionsList.add(String.format("%s < %s", queryDefinition.getColumn(), queryDefinition.getNextColumnValue()));
+        }
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildValueBatchQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to; its index depends on the time filter
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition, Instant time) throws Exception {
+        if (queryDefinition.isDeletionEnabled()) {
+            return getBatchWithoutIdentifierForDeletion(queryDefinition, time);
+        } else if (queryDefinition.isLpTableEnabled()) {
+            return getBatchWithoutIdentifierForLptable(queryDefinition, time);
+        } else {
+            return super.getBatchWithoutIdentifier(queryDefinition, time);
+        }
+    }
+
+    private DataSet getBatchWithoutIdentifierForDeletion(OffsetQueryDefinition queryDefinition, Instant time) throws Exception {
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildOffsetBatchQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
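+
+    /*
+     * Illustration: with deletion enabled and no time filter the only bind parameter is p_table,
+     * bound at index 1 by the paramIdx counter above; with a time filter the timestamp takes
+     * index 1 and p_table moves to index 2.
+     */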
+    private DataSet getBatchWithoutIdentifierForLptable(OffsetQueryDefinition queryDefinition, Instant time) throws Exception {
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(queryDefinition.getTable(), conditionsList);
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        List<String> pkList;
+        try (Connection connectionForPk = getConnection();
+                PreparedStatement stmt = connectionForPk.prepareStatement(buildOffsetBatchQueryForLptable(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            if (time != null) {
+                stmt.setTimestamp(1, Timestamp.from(time));
+            }
+            ResultSet pkResultSet = stmt.executeQuery();
+            pkList = convertToPkListForLpTable(pkResultSet);
+        }
+
+        // fetch the LP table rows that belong to the collected PKs (the PK-bound query has no parameters)
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildValueBatchQueryForLptableWithPK(queryDefinition, pkList, conditions))) {
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition, Instant time) throws Exception {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.getBatchMarkersOrderedByColumn(queryDefinition, time);
+        }
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(queryDefinition.getTable(), conditionsList);
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildBatchMarkersQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public long getRowCountModifiedAfter(String table, Instant time, boolean isDeletionEnabled, boolean lpTableMigrationEnabled) throws SQLException {
+        if (isDeletionEnabled) {
+            return getRowCountModifiedAfterForDeletion(table, time);
+        } else if (lpTableMigrationEnabled) {
+            return getRowCountModifiedAfterForLpTable(table, time);
+        } else {
+            return super.getRowCountModifiedAfter(table, time, false, false);
+        }
+    }
+
+    private long getRowCountModifiedAfterForLpTable(String table, Instant time) throws SQLException {
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(table, conditionsList);
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection()) {
+            try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? AND %s", getLpTableName(table), expandConditions(conditions)))) {
AND %s", getLpTableName(table), expandConditions(conditions)))) { + stmt.setTimestamp(1, Timestamp.from(time)); + ResultSet resultSet = stmt.executeQuery(); + long value = 0; + if (resultSet.next()) { + value = resultSet.getLong(1); + } + return value; + } + } + } + + private long getRowCountModifiedAfterforDeletion(String table, Instant time) throws SQLException { + // + List conditionsList = new ArrayList<>(2); + processDefaultConditions(table, conditionsList); + // setting table for the deletions + conditionsList.add("p_table = ?"); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection()) { + try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? AND %s", deletionTable, expandConditions(conditions)))) { + stmt.setTimestamp(1, Timestamp.from(time)); + // setting table for the deletions + stmt.setString(2,table); + ResultSet resultSet = stmt.executeQuery(); + long value = 0; + if (resultSet.next()) { + value = resultSet.getLong(1); + } + return value; + } + } + } + + private String buildBatchMarkersQueryForDeletion(MarkersQueryDefinition queryDefinition, String... conditions) { + String column = queryDefinition.getColumn(); + return String.format("SELECT t.%s, t.rownum\n" + + "FROM\n" + + "(\n" + + " SELECT %s, (ROW_NUMBER() OVER (ORDER BY %s))-1 AS rownum\n" + + " FROM %s\n WHERE %s" + + ") AS t\n" + + "WHERE t.rownum %% %s = 0\n" + + "ORDER BY t.%s", column, column, column, deletionTable, expandConditions(conditions), queryDefinition.getBatchSize(), column); + } + + private long getRowCountModifiedAfterForLP(String table, Instant time) throws SQLException { + List conditionsList = new ArrayList<>(1); + + if (! StringUtils.endsWithIgnoreCase(table,LP_SUFFIX)) { + return super.getRowCountModifiedAfter(table,time,false,false); + } + table = StringUtils.removeEndIgnoreCase(table,LP_SUFFIX); + + processDefaultConditions(table, conditionsList); + String[] conditions = null; + if (conditionsList.size() > 0) { + conditions = conditionsList.toArray(new String[conditionsList.size()]); + } + try (Connection connection = getConnection()) { + try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? AND %s", table, expandConditions(conditions)))) { + stmt.setTimestamp(1, Timestamp.from(time)); + ResultSet resultSet = stmt.executeQuery(); + long value = 0; + if (resultSet.next()) { + value = resultSet.getLong(1); + } + return value; + } + } + } + + private String getLpTableName(String tableName){ + return StringUtils.removeEndIgnoreCase(tableName,LP_SUFFIX); + } + } diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java new file mode 100644 index 0000000..a1a5391 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java @@ -0,0 +1,43 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+    private long getRowCountModifiedAfterForLP(String table, Instant time) throws SQLException {
+        if (!StringUtils.endsWithIgnoreCase(table, LP_SUFFIX)) {
+            return super.getRowCountModifiedAfter(table, time, false, false);
+        }
+        table = StringUtils.removeEndIgnoreCase(table, LP_SUFFIX);
+
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(table, conditionsList);
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection()) {
+            try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? AND %s", table, expandConditions(conditions)))) {
+                stmt.setTimestamp(1, Timestamp.from(time));
+                ResultSet resultSet = stmt.executeQuery();
+                long value = 0;
+                if (resultSet.next()) {
+                    value = resultSet.getLong(1);
+                }
+                return value;
+            }
+        }
+    }
+
+    private String getLpTableName(String tableName) {
+        return StringUtils.removeEndIgnoreCase(tableName, LP_SUFFIX);
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java
new file mode 100644
index 0000000..a1a5391
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataIncrementalRepositoryFactory.java
@@ -0,0 +1,43 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Strings;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+
+public class DataIncrementalRepositoryFactory extends DataRepositoryFactory {
+
+    public DataIncrementalRepositoryFactory(DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(databaseMigrationDataTypeMapperService);
+    }
+
+    @Override
+    public DataRepository create(DataSourceConfiguration dataSourceConfiguration) throws Exception {
+        String connectionString = dataSourceConfiguration.getConnectionString();
+        if (Strings.isNullOrEmpty(connectionString)) {
+            throw new RuntimeException("No connection string provided for data source '" + dataSourceConfiguration.getProfile() + "'");
+        } else {
+            String connectionStringLower = connectionString.toLowerCase();
+            if (connectionStringLower.startsWith("jdbc:mysql")) {
+                return new MySQLIncrementalDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:sqlserver")) {
+                return new AzureIncrementalDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:oracle")) {
+                // only MySQL and SQL Server have dedicated incremental repositories; the remaining
+                // databases fall back to their regular repository implementations
+                return new OracleDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:sap")) {
+                return new HanaDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:hsqldb")) {
+                return new HsqlRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:postgresql")) {
+                return new PostGresDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            }
+        }
+        throw new RuntimeException("Cannot handle connection string " + connectionString);
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataRepositoryFactory.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataRepositoryFactory.java
new file mode 100644
index 0000000..35435e8
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/DataRepositoryFactory.java
@@ -0,0 +1,47 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Strings;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+
+public class DataRepositoryFactory {
+
+    protected final DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService;
+
+    public DataRepositoryFactory(DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        this.databaseMigrationDataTypeMapperService = databaseMigrationDataTypeMapperService;
+    }
+
+    public DataRepository create(DataSourceConfiguration dataSourceConfiguration) throws Exception {
+        String connectionString = dataSourceConfiguration.getConnectionString();
+        if (Strings.isNullOrEmpty(connectionString)) {
+            throw new RuntimeException("No connection string provided for data source '" + dataSourceConfiguration.getProfile() + "'");
+        } else {
+            String connectionStringLower = connectionString.toLowerCase();
+            if (connectionStringLower.startsWith("jdbc:mysql")) {
+                return new MySQLDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:sqlserver")) {
+                return new AzureDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:oracle")) {
+                return new OracleDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:sap")) {
+                return new HanaDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:hsqldb")) {
+                return new HsqlRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            } else if (connectionStringLower.startsWith("jdbc:postgresql")) {
+                return new PostGresDataRepository(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+            }
+        }
+        throw new RuntimeException("Cannot handle connection string " + connectionString);
+    }
+}
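+/*
+ * Usage sketch (hypothetical wiring): both factories key off the JDBC URL prefix, so a
+ * "jdbc:sqlserver://..." connection string yields an AzureDataRepository here and an
+ * AzureIncrementalDataRepository from DataIncrementalRepositoryFactory:
+ *
+ *   DataRepositoryFactory factory = new DataIncrementalRepositoryFactory(mapperService);
+ *   DataRepository repository = factory.create(sourceConfiguration);
+ */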
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HanaDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HanaDataRepository.java
new file mode 100644
index 0000000..8cc44b6
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HanaDataRepository.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Joiner;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.repository.platform.MigrationHybrisHANAPlatform;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.bootstrap.ddl.DataBaseProvider;
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+import org.apache.ddlutils.Platform;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+import org.springframework.core.io.Resource;
+import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;
+
+import javax.sql.DataSource;
+import java.sql.Types;
+
+public class HanaDataRepository extends AbstractDataRepository {
+
+    public HanaDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+    }
+
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns());
+        return String.format("select * from %s where %s order by %s limit %s offset %s", queryDefinition.getTable(), expandConditions(conditions), orderBy, queryDefinition.getBatchSize(), queryDefinition.getOffset());
+    }
+
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        return String.format("select * from %s where %s order by %s limit %s", queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getColumn(), queryDefinition.getBatchSize());
+    }
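+
+    /*
+     * Illustration (hypothetical values): a seek batch of 1000 rows over column PK renders as
+     *   select * from products where 1=1 order by PK limit 1000
+     * and the offset variant above appends "offset <n>" after the limit clause
+     * (assuming expandConditions yields "1=1" when no condition applies).
+     */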
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        String column = queryDefinition.getColumn();
+        return String.format("SELECT t.%s, t.rownr as \"rownum\" \n" +
+                "FROM\n" +
+                "(\n" +
+                "  SELECT %s, (ROW_NUMBER() OVER (ORDER BY %s))-1 AS rownr\n" +
+                "  FROM %s\n WHERE %s" +
+                ") t\n" +
+                "WHERE mod(t.rownr,%s) = 0\n" +
+                "ORDER BY t.%s", column, column, column, queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getBatchSize(), column);
+    }
+
+    @Override
+    protected String createAllTableNamesQuery() {
+        return String.format("select distinct table_name from table_columns where lower(schema_name) = lower('%s') order by table_name", getDataSourceConfiguration().getSchema());
+    }
+
+    @Override
+    protected String createAllColumnNamesQuery(String table) {
+        return String.format("select distinct column_name from table_columns where lower(schema_name) = lower('%s') and lower(table_name) = lower('%s')", getDataSourceConfiguration().getSchema(), table);
+    }
+
+    @Override
+    public void runSqlScript(Resource resource) {
+        final ResourceDatabasePopulator databasePopulator = new ResourceDatabasePopulator(resource);
+        databasePopulator.setIgnoreFailedDrops(true);
+        databasePopulator.setSeparator("#");
+        databasePopulator.execute(getDataSource());
+    }
+
+    @Override
+    protected String createUniqueColumnsQuery(String tableName) {
+        return String.format("SELECT t2.\"COLUMN_NAME\"\n" +
+                "FROM\n" +
+                "(\n" +
+                "  SELECT * FROM (\n" +
+                "    SELECT i.\"SCHEMA_NAME\", i.\"TABLE_NAME\", i.\"INDEX_NAME\", count(*) as \"COL_COUNT\"\n" +
+                "    FROM INDEXES i\n" +
+                "    INNER JOIN INDEX_COLUMNS c\n" +
+                "    ON i.\"INDEX_NAME\" = c.\"INDEX_NAME\" AND i.\"SCHEMA_NAME\" = c.\"SCHEMA_NAME\" AND i.\"TABLE_NAME\" = c.\"TABLE_NAME\"\n" +
+                "    WHERE \n" +
+                "      lower(i.\"SCHEMA_NAME\") = lower('%s')\n" +
+                "      AND\n" +
+                "      lower(i.\"TABLE_NAME\") = lower('%s')\n" +
+                "      AND(\n" +
+                "        lower(i.\"CONSTRAINT\") = lower('UNIQUE') OR \n" +
+                "        lower(i.\"CONSTRAINT\") = lower('PRIMARY KEY'))\n" +
+                "    GROUP BY i.\"SCHEMA_NAME\", i.\"TABLE_NAME\", i.\"INDEX_NAME\"\n" +
+                "    ORDER BY COL_COUNT ASC \n" +
+                "  )\n" +
+                "  LIMIT 1\n" +
+                ") t1\n" +
+                "INNER JOIN INDEX_COLUMNS t2\n" +
+                "ON t1.\"INDEX_NAME\" = t2.\"INDEX_NAME\" AND t1.\"SCHEMA_NAME\" = t2.\"SCHEMA_NAME\" AND t1.\"TABLE_NAME\" = t2.\"TABLE_NAME\"", getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    protected void addCustomPlatformTypeMapping(final Platform platform) {
+        platform.getPlatformInfo().addNativeTypeMapping(Types.NCHAR, "NVARCHAR", Types.NVARCHAR);
+        platform.getPlatformInfo().addNativeTypeMapping(Types.CHAR, "VARCHAR", Types.VARCHAR);
+        platform.getPlatformInfo().addNativeTypeMapping(Types.DOUBLE, "DECIMAL", Types.DECIMAL);
+    }
+
+    @Override
+    public DataBaseProvider getDatabaseProvider() {
+        return DataBaseProvider.HANA;
+    }
+
+    @Override
+    protected Platform createPlatform(DatabaseSettings databaseSettings, DataSource dataSource) {
+        HybrisPlatform instance = MigrationHybrisHANAPlatform.build(databaseSettings);
+        instance.setDataSource(dataSource);
+        return instance;
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HsqlRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HsqlRepository.java
new file mode 100644
index 0000000..3b8708a
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/HsqlRepository.java
@@ -0,0 +1,56 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.bootstrap.ddl.DataBaseProvider;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+
+public class HsqlRepository extends AbstractDataRepository {
+
+    public HsqlRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+    }
+
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    protected String createAllTableNamesQuery() {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    protected String createAllColumnNamesQuery(String table) {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    protected String createUniqueColumnsQuery(String tableName) {
+        throw new UnsupportedOperationException("not implemented");
+    }
+
+    @Override
+    public DataBaseProvider getDatabaseProvider() {
+        return DataBaseProvider.HSQL;
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLDataRepository.java
new file mode 100644
index 0000000..fbc019f
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLDataRepository.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Joiner;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.bootstrap.ddl.DataBaseProvider;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+
+public class MySQLDataRepository extends AbstractDataRepository {
+
+    public MySQLDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+    }
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns());
+        return String.format("select * from %s where %s order by %s limit %s,%s", queryDefinition.getTable(), expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize());
+    }
+
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        return String.format("select * from %s where %s order by %s limit %s", queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getColumn(), queryDefinition.getBatchSize());
+    }
+
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        String column = queryDefinition.getColumn();
+        return String.format("SELECT %s,rownum\n" +
+                "FROM ( \n" +
+                "    SELECT \n" +
+                "        @row := @row +1 AS rownum, %s \n" +
+                "    FROM (SELECT @row :=-1) r, %s WHERE %s ORDER BY %s) ranked \n" +
+                "WHERE rownum %% %s = 0 ", column, column, queryDefinition.getTable(), expandConditions(conditions), column, queryDefinition.getBatchSize());
+    }
+
+    @Override
+    protected String createAllTableNamesQuery() {
+        return String.format(
+                "select TABLE_NAME from information_schema.tables where table_schema = '%s' and TABLE_TYPE = 'BASE TABLE'",
+                getDataSourceConfiguration().getSchema());
+    }
+
+    @Override
+    protected String createAllColumnNamesQuery(String tableName) {
+        return String.format(
+                "SELECT DISTINCT COLUMN_NAME from information_schema.columns where table_schema = '%s' AND TABLE_NAME = '%s'",
+                getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    protected String createUniqueColumnsQuery(String tableName) {
+        return String.format(
+                "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.STATISTICS t1\n" +
+                "INNER JOIN \n" +
+                "(\n" +
+                "SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, count(INDEX_NAME) as COL_COUNT \n" +
+                "FROM INFORMATION_SCHEMA.STATISTICS \n" +
+                "WHERE TABLE_SCHEMA = '%s' AND TABLE_NAME = '%s' AND NON_UNIQUE = 0\n" +
+                "GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME\n" +
+                "ORDER BY COL_COUNT ASC\n" +
+                "LIMIT 1\n" +
+                ") t2\n" +
+                "ON t1.TABLE_SCHEMA = t2.TABLE_SCHEMA AND t1.TABLE_NAME = t2.TABLE_NAME AND t1.INDEX_NAME = t2.INDEX_NAME\n" +
+                ";\n",
+                getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    public DataBaseProvider getDatabaseProvider() {
+        return DataBaseProvider.MYSQL;
+    }
+}
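+/*
+ * Illustration (hypothetical values): MySQL's LIMIT takes (offset, row_count), so offset 2000 with
+ * batch size 1000 renders "limit 2000,1000"; the marker query numbers rows via a session variable
+ * and keeps every 1000th one:
+ *
+ *   SELECT PK,rownum FROM (SELECT @row := @row +1 AS rownum, PK
+ *       FROM (SELECT @row :=-1) r, products WHERE 1=1 ORDER BY PK) ranked
+ *   WHERE rownum % 1000 = 0
+ */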
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLIncrementalDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLIncrementalDataRepository.java
new file mode 100644
index 0000000..bc2582b
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/MySQLIncrementalDataRepository.java
@@ -0,0 +1,206 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Joiner;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.platform.util.Config;
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.List;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class MySQLIncrementalDataRepository extends MySQLDataRepository {
+
+    private static final Logger LOG = LoggerFactory.getLogger(MySQLIncrementalDataRepository.class);
+
+    // deletion markers table, optionally prefixed via db.tableprefix (e.g. "cc_itemdeletionmarkers")
+    private static final String deletionTable = (Config.getParameter("db.tableprefix") == null ? "" : Config.getParameter("db.tableprefix")) + "itemdeletionmarkers";
+
+    public MySQLIncrementalDataRepository(
+            DataSourceConfiguration dataSourceConfiguration,
+            DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+    }
+
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.buildOffsetBatchQuery(queryDefinition, conditions);
+        }
+        String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns());
+        return String.format("select * from %s where %s order by %s limit %s,%s", deletionTable, expandConditions(conditions), orderBy, queryDefinition.getOffset(), queryDefinition.getBatchSize());
+    }
+
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.buildValueBatchQuery(queryDefinition, conditions);
+        }
+        return String.format("select * from %s where %s order by %s limit %s", deletionTable, expandConditions(conditions), queryDefinition.getColumn(), queryDefinition.getBatchSize());
+    }
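+
+    /*
+     * Illustration (hypothetical values): with deletion enabled, seek column "modifiedts" and batch
+     * size 1000 the value-batch query reads
+     *   select * from itemdeletionmarkers where modifiedts > ? and p_table = ? order by modifiedts limit 1000
+     */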
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.buildBatchMarkersQuery(queryDefinition, conditions);
+        }
+        String column = queryDefinition.getColumn();
+        return String.format("SELECT %s,rownum\n" +
+                "FROM ( \n" +
+                "    SELECT \n" +
+                "        @row := @row +1 AS rownum, %s \n" +
+                "    FROM (SELECT @row :=-1) r, %s WHERE %s ORDER BY %s) ranked \n" +
+                "WHERE rownum %% %s = 0 ", column, column, deletionTable, expandConditions(conditions), column, queryDefinition.getBatchSize());
+    }
+
+    @Override
+    public DataSet getBatchOrderedByColumn(SeekQueryDefinition queryDefinition, Instant time) throws Exception {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.getBatchOrderedByColumn(queryDefinition, time);
+        }
+
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(queryDefinition.getTable(), conditionsList);
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        if (queryDefinition.getLastColumnValue() != null) {
+            conditionsList.add(String.format("%s >= %s", queryDefinition.getColumn(), queryDefinition.getLastColumnValue()));
+        }
+        if (queryDefinition.getNextColumnValue() != null) {
+            conditionsList.add(String.format("%s < %s", queryDefinition.getColumn(), queryDefinition.getNextColumnValue()));
+        }
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildValueBatchQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public DataSet getBatchWithoutIdentifier(OffsetQueryDefinition queryDefinition, Instant time) throws Exception {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.getBatchWithoutIdentifier(queryDefinition, time);
+        }
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(queryDefinition.getTable(), conditionsList);
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildOffsetBatchQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public DataSet getBatchMarkersOrderedByColumn(MarkersQueryDefinition queryDefinition, Instant time) throws Exception {
+        if (!queryDefinition.isDeletionEnabled()) {
+            return super.getBatchMarkersOrderedByColumn(queryDefinition, time);
+        }
+        // get batches with modifiedts >= configured time for incremental migration
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(queryDefinition.getTable(), conditionsList);
+        if (time != null) {
+            conditionsList.add("modifiedts > ?");
+        }
+        conditionsList.add("p_table = ?");
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection();
+                PreparedStatement stmt = connection.prepareStatement(buildBatchMarkersQuery(queryDefinition, conditions))) {
+            stmt.setFetchSize(Long.valueOf(queryDefinition.getBatchSize()).intValue());
+            int paramIdx = 1;
+            if (time != null) {
+                stmt.setTimestamp(paramIdx++, Timestamp.from(time));
+            }
+            // bind the item table the deletion markers belong to
+            stmt.setString(paramIdx, queryDefinition.getTable());
+            ResultSet resultSet = stmt.executeQuery();
+            return convertToBatchDataSet(resultSet);
+        }
+    }
+
+    @Override
+    public long getRowCountModifiedAfter(String table, Instant time, boolean isDeletionEnabled, boolean lpTableMigrationEnabled) throws SQLException {
+        if (!isDeletionEnabled) {
+            return super.getRowCountModifiedAfter(table, time, false, false);
+        }
+        List<String> conditionsList = new ArrayList<>();
+        processDefaultConditions(table, conditionsList);
+        // restrict the deletion markers to the migrated item table
+        conditionsList.add("p_table = ?");
+        String[] conditions = null;
+        if (!conditionsList.isEmpty()) {
+            conditions = conditionsList.toArray(new String[0]);
+        }
+        try (Connection connection = getConnection()) {
+            try (PreparedStatement stmt = connection.prepareStatement(String.format("select count(*) from %s where modifiedts > ? AND %s", deletionTable, expandConditions(conditions)))) {
+                stmt.setTimestamp(1, Timestamp.from(time));
+                stmt.setString(2, table);
+                ResultSet resultSet = stmt.executeQuery();
+                long value = 0;
+                if (resultSet.next()) {
+                    value = resultSet.getLong(1);
+                }
+                return value;
+            }
+        }
+    }
+}
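+/*
+ * Illustration: the deletion row count for table "products" since a given instant executes
+ *   select count(*) from itemdeletionmarkers where modifiedts > ? AND p_table = ?
+ * with the timestamp bound at index 1 and "products" bound at index 2.
+ */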
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/OracleDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/OracleDataRepository.java
new file mode 100644
index 0000000..db46d3a
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/OracleDataRepository.java
@@ -0,0 +1,232 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.bootstrap.ddl.DataBaseProvider;
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisOraclePlatform;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.util.Collections;
+
+import javax.sql.DataSource;
+
+import org.apache.ddlutils.Platform;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.dataset.DataSet;
+import org.springframework.core.io.Resource;
+import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;
+
+import com.google.common.base.Joiner;
+
+public class OracleDataRepository extends AbstractDataRepository {
+
+    public OracleDataRepository(final DataSourceConfiguration dataSourceConfiguration,
+            final DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+        ensureJdbcCompliance();
+    }
+
+    private void ensureJdbcCompliance() {
+        // without this, types like timestamps may not be JDBC compliant
+        System.getProperties().setProperty("oracle.jdbc.J2EE13Compliant", "true");
+        // ORACLE_TARGET - START
+        System.getProperties().setProperty("oracle.jdbc.autoCommitSpecCompliant", "false");
+        // ORACLE_TARGET - END
+    }
+
+    @Override
+    protected DataSet convertToBatchDataSet(final ResultSet resultSet) throws Exception {
+        // drop the synthetic "rn" pagination column before handing the batch on
+        return convertToDataSet(resultSet, Collections.singleton("rn"));
+    }
+
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns());
+        return String.format(
+                "select * " +
+                "  from ( " +
+                "    select /*+ first_rows(%s) */ " +
+                "      t.*, " +
+                "      row_number() " +
+                "      over (order by %s) rn " +
+                "    from %s t where %s) " +
+                "where rn between %s and %s " +
+                "order by rn", queryDefinition.getBatchSize(), orderBy, queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getOffset() + 1, queryDefinition.getOffset() + queryDefinition.getBatchSize());
+    }
+
+    // https://blogs.oracle.com/oraclemagazine/on-top-n-and-pagination-queries
+    // "Pagination in Getting Rows N Through M"
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        return String.format(
+                "select * " +
+                "  from ( " +
+                "    select /*+ first_rows(%s) */ " +
+                "      t.*, " +
+                "      row_number() " +
+                "      over (order by t.%s) rn " +
+                "    from %s t where %s) " +
+                "where rn <= %s " +
+                "order by rn", queryDefinition.getBatchSize(), queryDefinition.getColumn(), queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getBatchSize());
+    }
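+
+    // Illustration (hypothetical values): rows 2001..3000 of "products" ordered by PK become
+    //   select * from (select /*+ first_rows(1000) */ t.*, row_number() over (order by PK) rn
+    //                  from products t where 1=1)
+    //   where rn between 2001 and 3000 order by rn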
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        String column = queryDefinition.getColumn();
+        return String.format("SELECT t.%s, t.rownr as \"rownum\" \n" +
+                "FROM\n" +
+                "(\n" +
+                "  SELECT %s, (ROW_NUMBER() OVER (ORDER BY %s))-1 AS rownr\n" +
+                "  FROM %s\n WHERE %s" +
+                ") t\n" +
+                "WHERE mod(t.rownr,%s) = 0\n" +
+                "ORDER BY t.%s", column, column, column, queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getBatchSize(), column);
+    }
+
+    @Override
+    protected String createAllTableNamesQuery() {
+        return String.format(
+                "select distinct TABLE_NAME from ALL_TAB_COLUMNS where lower(OWNER) = lower('%s')",
+                getDataSourceConfiguration().getSchema());
+    }
+
+    @Override
+    protected String createAllColumnNamesQuery(String table) {
+        return String.format(
+                "select distinct COLUMN_NAME from ALL_TAB_COLUMNS where lower(OWNER) = lower('%s') AND lower(TABLE_NAME) = lower('%s')",
+                getDataSourceConfiguration().getSchema(), table);
+    }
+
+    @Override
+    protected String createUniqueColumnsQuery(String tableName) {
+        return String.format("SELECT t2.\"COLUMN_NAME\"\n" +
+                "FROM\n" +
+                "(\n" +
+                "  SELECT * FROM (\n" +
+                "    SELECT i.\"OWNER\", i.\"TABLE_NAME\", i.\"INDEX_NAME\", count(*) as \"COL_COUNT\"\n" +
+                "    FROM ALL_INDEXES i\n" +
+                "    INNER JOIN ALL_IND_COLUMNS c\n" +
+                "    ON i.\"INDEX_NAME\" = c.\"INDEX_NAME\" AND i.\"OWNER\" = c.\"INDEX_OWNER\" AND i.\"TABLE_NAME\" = c.\"TABLE_NAME\"\n" +
+                "    WHERE \n" +
+                "      lower(i.\"OWNER\") = lower('%s')\n" +
+                "      AND\n" +
+                "      lower(i.\"TABLE_NAME\") = lower('%s')\n" +
+                "      AND\n" +
+                "      lower(i.\"UNIQUENESS\") = lower('UNIQUE')\n" +
+                "    GROUP BY i.\"OWNER\", i.\"TABLE_NAME\", i.\"INDEX_NAME\"\n" +
+                "    ORDER BY COL_COUNT ASC \n" +
+                "  )\n" +
+                "  WHERE ROWNUM = 1\n" +
+                ") t1\n" +
+                "INNER JOIN ALL_IND_COLUMNS t2\n" +
+                "ON t1.\"INDEX_NAME\" = t2.\"INDEX_NAME\" AND t1.\"OWNER\" = t2.\"INDEX_OWNER\" AND t1.\"TABLE_NAME\" = t2.\"TABLE_NAME\"", getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    protected Platform createPlatform(final DatabaseSettings databaseSettings, final DataSource dataSource) {
+        final HybrisPlatform platform = HybrisOraclePlatform.build(databaseSettings);
+        /*
+         * ORACLE_TARGET: if JdbcModelReader.readTables() is invoked with a null schemaPattern,
+         * Oracle retrieves ALL tables, including the SYS ones. This causes issues such as
+         * "Unsupported JDBC Type" exceptions, so the schema pattern is always set to the target
+         * Oracle schema.
+         */
+        platform.getModelReader().setDefaultSchemaPattern(getDataSourceConfiguration().getSchema());
+        platform.setDataSource(dataSource);
+        return platform;
+    }
+
+    // ORACLE_TARGET: the separator needs to be "/" for the PL/SQL style blocks to run,
+    // otherwise an EOF exception is raised on ";"
+    @Override
+    public void runSqlScript(final Resource resource) {
+        final ResourceDatabasePopulator databasePopulator = new ResourceDatabasePopulator(resource);
+        databasePopulator.setIgnoreFailedDrops(true);
+        databasePopulator.setSeparator("/");
+        databasePopulator.execute(getDataSource());
+    }
+
+    @Override
+    public float getDatabaseUtilization() throws SQLException {
+        return (float) 1.00;
+    }
+
+    @Override
+    protected void addCustomPlatformTypeMapping(final Platform platform) {
+        platform.getPlatformInfo().addNativeTypeMapping(Types.NVARCHAR, "VARCHAR2");
+        platform.getPlatformInfo().setHasSize(Types.NVARCHAR, true);
+        platform.getPlatformInfo().addNativeTypeMapping(Types.VARBINARY, "BLOB");
+        platform.getPlatformInfo().setHasSize(Types.VARBINARY, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.REAL, "NUMBER(30,8)");
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.REAL, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.DOUBLE, "NUMBER(30,8)");
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.DOUBLE, false);
+        platform.getPlatformInfo().setHasSize(Types.DOUBLE, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.BIGINT, "NUMBER(20,0)");
+        platform.getPlatformInfo().setHasSize(Types.BIGINT, false);
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.BIGINT, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.INTEGER, "NUMBER(20,0)");
+        platform.getPlatformInfo().setHasSize(Types.INTEGER, false);
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.INTEGER, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.TINYINT, "NUMBER(1,0)");
+        platform.getPlatformInfo().setHasSize(Types.TINYINT, false);
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.TINYINT, false);
+
+        platform.getPlatformInfo().addNativeTypeMapping(Types.CHAR, "NUMBER(10,0)");
+        platform.getPlatformInfo().setHasSize(Types.CHAR, false);
+        platform.getPlatformInfo().setHasPrecisionAndScale(Types.CHAR, false);
+    }
+
+    @Override
+    public DataBaseProvider getDatabaseProvider() {
+        return DataBaseProvider.ORACLE;
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/PostGresDataRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/PostGresDataRepository.java
new file mode 100644
index 0000000..257b5d4
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/impl/PostGresDataRepository.java
@@ -0,0 +1,110 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.impl;
+
+import com.google.common.base.Joiner;
+import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService;
+import de.hybris.bootstrap.ddl.DataBaseProvider;
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+import org.apache.ddlutils.Platform;
+import com.sap.cx.boosters.commercedbsync.MarkersQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.OffsetQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.SeekQueryDefinition;
+import com.sap.cx.boosters.commercedbsync.repository.platform.MigrationHybrisPostGresPlatform;
+import org.springframework.core.io.Resource;
+import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;
+
+import javax.sql.DataSource;
+
+public class PostGresDataRepository extends AbstractDataRepository {
+
+    public PostGresDataRepository(DataSourceConfiguration dataSourceConfiguration, DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService) {
+        super(dataSourceConfiguration, databaseMigrationDataTypeMapperService);
+    }
+
+    @Override
+    protected String buildOffsetBatchQuery(OffsetQueryDefinition queryDefinition, String... conditions) {
+        String orderBy = Joiner.on(',').join(queryDefinition.getAllColumns());
+        // PostgreSQL has no MySQL-style "limit offset,count"; use LIMIT ... OFFSET ...
+        return String.format("select * from %s where %s order by %s limit %s offset %s", queryDefinition.getTable(), expandConditions(conditions), orderBy, queryDefinition.getBatchSize(), queryDefinition.getOffset());
+    }
+
+    @Override
+    public void runSqlScript(Resource resource) {
+        final ResourceDatabasePopulator databasePopulator = new ResourceDatabasePopulator(resource);
+        databasePopulator.setIgnoreFailedDrops(true);
+        databasePopulator.setSeparator("#");
+        databasePopulator.execute(getDataSource());
+    }
+
+    @Override
+    protected String buildValueBatchQuery(SeekQueryDefinition queryDefinition, String... conditions) {
+        return String.format("select * from %s where %s order by %s limit %s", queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getColumn(), queryDefinition.getBatchSize());
+    }
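+
+    /*
+     * Illustration (hypothetical values): batch size 1000 at offset 2000 reads
+     *   select * from products where 1=1 order by <all columns> limit 1000 offset 2000
+     */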
+    @Override
+    protected String buildBatchMarkersQuery(MarkersQueryDefinition queryDefinition, String... conditions) {
+        String column = queryDefinition.getColumn();
+        // window-function variant (MySQL session variables are not available in PostgreSQL),
+        // mirroring the HANA and Oracle implementations
+        return String.format("SELECT %s, rownum\n" +
+                "FROM\n" +
+                "(\n" +
+                "  SELECT %s, (ROW_NUMBER() OVER (ORDER BY %s))-1 AS rownum\n" +
+                "  FROM %s\n WHERE %s" +
+                ") t\n" +
+                "WHERE mod(rownum,%s) = 0\n" +
+                "ORDER BY %s", column, column, column, queryDefinition.getTable(), expandConditions(conditions), queryDefinition.getBatchSize(), column);
+    }
+
+    @Override
+    protected String createAllTableNamesQuery() {
+        return String.format(
+                "select TABLE_NAME from information_schema.tables where table_schema = '%s' and TABLE_TYPE = 'BASE TABLE'",
+                getDataSourceConfiguration().getSchema());
+    }
+
+    @Override
+    protected String createAllColumnNamesQuery(String tableName) {
+        return String.format(
+                "SELECT DISTINCT COLUMN_NAME from information_schema.columns where table_schema = '%s' AND TABLE_NAME = '%s'",
+                getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    protected String createUniqueColumnsQuery(String tableName) {
+        // PostgreSQL system catalogs; information_schema.statistics is MySQL-specific.
+        // Picks the unique index with the fewest columns and returns its column names.
+        return String.format(
+                "SELECT a.attname AS COLUMN_NAME\n" +
+                "FROM pg_attribute a\n" +
+                "INNER JOIN (\n" +
+                "  SELECT ix.indrelid, ix.indkey\n" +
+                "  FROM pg_index ix\n" +
+                "  INNER JOIN pg_class t ON t.oid = ix.indrelid\n" +
+                "  INNER JOIN pg_namespace n ON n.oid = t.relnamespace\n" +
+                "  WHERE lower(n.nspname) = lower('%s') AND lower(t.relname) = lower('%s') AND ix.indisunique\n" +
+                "  ORDER BY ix.indnatts ASC\n" +
+                "  LIMIT 1\n" +
+                ") i ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey)",
+                getDataSourceConfiguration().getSchema(), tableName);
+    }
+
+    @Override
+    protected void addCustomPlatformTypeMapping(final Platform platform) {
+        // no custom mappings required for PostgreSQL
+    }
+
+    @Override
+    public DataBaseProvider getDatabaseProvider() {
+        return DataBaseProvider.POSTGRESQL;
+    }
+
+    @Override
+    protected Platform createPlatform(DatabaseSettings databaseSettings, DataSource dataSource) {
+        HybrisPlatform instance = MigrationHybrisPostGresPlatform.build(databaseSettings);
+        instance.setDataSource(dataSource);
+        return instance;
+    }
+}
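+/*
+ * Illustration (hypothetical values): for marker column "PK" and batch size 1000 the marker query
+ * above returns every 1000th PK, e.g.
+ *   SELECT PK, rownum FROM (SELECT PK, (ROW_NUMBER() OVER (ORDER BY PK))-1 AS rownum
+ *                           FROM products WHERE 1=1) t
+ *   WHERE mod(rownum,1000) = 0 ORDER BY PK
+ */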
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANABuilder.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANABuilder.java
new file mode 100644
index 0000000..d871175
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANABuilder.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.sql.ColumnNativeTypeDecorator;
+import de.hybris.bootstrap.ddl.sql.HanaSqlBuilder;
+import org.apache.ddlutils.Platform;
+import org.apache.ddlutils.model.Column;
+
+import java.sql.Types;
+
+public class MigrationHybrisHANABuilder extends HanaSqlBuilder {
+
+    public MigrationHybrisHANABuilder(Platform platform, DatabaseSettings databaseSettings,
+            final Iterable<ColumnNativeTypeDecorator> columnNativeTypeDecorators) {
+        super(platform, databaseSettings, columnNativeTypeDecorators);
+    }
+
+    @Override
+    protected String getSqlType(Column column) {
+        /*
+         * core-advanced-deployment.xml:661
+         * TODO implement a more generic mapper for special attributes
+         */
+        final String nativeType = this.getNativeType(column);
+        final int sizePos = nativeType.indexOf(SIZE_PLACEHOLDER);
+        final StringBuilder sqlType = new StringBuilder();
+
+        if (column.getTypeCode() == Types.NVARCHAR && column.getSize() != null && Integer.parseInt(column.getSize()) > 5000) {
+            return sqlType.append("NCLOB").toString();
+        }
+
+        sqlType.append(sizePos >= 0 ? nativeType.substring(0, sizePos) : nativeType);
+
+        Object sizeSpec = column.getSize();
+        if (sizeSpec == null) {
+            sizeSpec = this.getPlatformInfo().getDefaultSize(column.getTypeCode());
+        }
+
+        if (sizeSpec != null) {
+            if (this.getPlatformInfo().hasSize(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(detectSize(column));
+                sqlType.append(")");
+            } else if (this.getPlatformInfo().hasPrecisionAndScale(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(column.getSizeAsInt());
+                sqlType.append(",");
+                sqlType.append(column.getScale());
+                sqlType.append(")");
+            }
+        }
+        sqlType.append(sizePos >= 0 ? nativeType.substring(sizePos + "{0}".length()) : "");
+        return sqlType.toString();
+    }
+
+    // ddlutils cannot handle "complex" sizes out of the box, therefore adding support here
+    private String detectSize(Column column) {
+        if (this.getPlatformInfo().hasSize(column.getTypeCode())) {
+            if (column.getTypeCode() == Types.NVARCHAR) {
+                if (column.getSizeAsInt() > 255 && column.getSizeAsInt() <= 5000) {
+                    return "5000";
+                }
+            } else if (column.getTypeCode() == Types.DOUBLE) {
+                return "30,8";
+            }
+        }
+        return column.getSize();
+    }
+}
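+/*
+ * Illustration: with the rules above an NVARCHAR column sized 300 is widened to NVARCHAR(5000) and
+ * an NVARCHAR sized 6000 becomes NCLOB; assuming the DOUBLE -> DECIMAL mapping registered by
+ * HanaDataRepository, a DOUBLE column renders as DECIMAL(30,8).
+ */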
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANAPlatform.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANAPlatform.java
new file mode 100644
index 0000000..24197f4
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisHANAPlatform.java
@@ -0,0 +1,89 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import com.google.common.collect.ImmutableList;
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisHanaPlatform;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+import de.hybris.bootstrap.ddl.jdbc.PlatformJDBCMappingProvider;
+import de.hybris.bootstrap.ddl.sql.ColumnNativeTypeDecorator;
+import de.hybris.bootstrap.ddl.sql.HanaBlobColumnNativeTypeDecorator;
+import org.apache.ddlutils.PlatformInfo;
+import org.apache.ddlutils.model.JdbcTypeCategoryEnum;
+import org.apache.ddlutils.model.TypeMap;
+import org.apache.ddlutils.platform.SqlBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Types;
+
+public class MigrationHybrisHANAPlatform extends HybrisHanaPlatform implements HybrisPlatform {
+
+    private static final Logger LOG = LoggerFactory.getLogger(MigrationHybrisHANAPlatform.class);
+
+    private SqlBuilder sqlBuilder;
+
+    private MigrationHybrisHANAPlatform(final DatabaseSettings databaseSettings) {
+        super(databaseSettings);
+    }
+
+    public static HybrisPlatform build(DatabaseSettings databaseSettings) {
+        final MigrationHybrisHANAPlatform instance = new MigrationHybrisHANAPlatform(databaseSettings);
+        HANAHybrisTypeMap.register();
+        instance.provideCustomMapping();
+        instance.setSqlBuilder(new MigrationHybrisHANABuilder(instance, databaseSettings, getNativeTypeDecorators(databaseSettings)));
+        return instance;
+    }
+
+    private void provideCustomMapping() {
+        final PlatformInfo platformInfo = getPlatformInfo();
+
+        platformInfo.setMaxColumnNameLength(PlatformJDBCMappingProvider.MAX_COLUMN_NAME_LENGTH);
+
+        platformInfo.addNativeTypeMapping(PlatformJDBCMappingProvider.HYBRIS_PK, "BIGINT", Types.BIGINT);
+        platformInfo.addNativeTypeMapping(PlatformJDBCMappingProvider.HYBRIS_LONG_STRING, "NCLOB", Types.NCLOB);
+        platformInfo.addNativeTypeMapping(PlatformJDBCMappingProvider.HYBRIS_JSON, "NCLOB", Types.LONGVARCHAR);
+        platformInfo.addNativeTypeMapping(PlatformJDBCMappingProvider.HYBRIS_COMMA_SEPARATED_PKS, "NVARCHAR{0}", Types.NVARCHAR);
+
+        platformInfo.setHasSize(PlatformJDBCMappingProvider.HYBRIS_LONG_STRING, true);
+        platformInfo.setHasSize(PlatformJDBCMappingProvider.HYBRIS_COMMA_SEPARATED_PKS, true);
+
+        platformInfo.addNativeTypeMapping(Types.BIT, "DECIMAL(1,0)", Types.NUMERIC);
+
+        platformInfo.addNativeTypeMapping(Types.DECIMAL, "DECIMAL", Types.DECIMAL);
+        platformInfo.setHasSize(Types.FLOAT, true);
+        platformInfo.setHasSize(Types.DOUBLE, true);
+        platformInfo.setHasSize(Types.NVARCHAR, true);
+    }
+
+    @Override
+    public SqlBuilder getSqlBuilder() {
+        return this.sqlBuilder;
+    }
+
+    @Override
+    protected void setSqlBuilder(SqlBuilder builder) {
+        this.sqlBuilder = builder;
+    }
+
+    private static Iterable<ColumnNativeTypeDecorator> getNativeTypeDecorators(final DatabaseSettings databaseSettings) {
+        return ImmutableList.of(new HanaBlobColumnNativeTypeDecorator(databaseSettings));
+    }
+
+    static class HANAHybrisTypeMap extends TypeMap {
+        static void register() {
+            registerJdbcType(Types.NCHAR, "NVARCHAR", JdbcTypeCategoryEnum.TEXTUAL);
+            registerJdbcType(Types.NCLOB, "NCLOB", JdbcTypeCategoryEnum.TEXTUAL);
+        }
+    }
+}
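+/*
+ * Illustration: with these mappings a hybris boolean attribute (java.sql.Types.BIT) is created as
+ * DECIMAL(1,0) on HANA, and HYBRIS_LONG_STRING attributes are written as NCLOB columns.
+ */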
index 0000000..03e0acd
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisMSSqlBuilder.java
@@ -0,0 +1,79 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.sql.HybrisMSSqlBuilder;
+import org.apache.ddlutils.Platform;
+import org.apache.ddlutils.model.Column;
+
+import java.sql.Types;
+
+public class MigrationHybrisMSSqlBuilder extends HybrisMSSqlBuilder {
+
+    public MigrationHybrisMSSqlBuilder(Platform platform, DatabaseSettings databaseSettings) {
+        super(platform, databaseSettings);
+    }
+
+    @Override
+    protected String getSqlType(Column column) {
+        /*
+         core-advanced-deployment.xml:661
+         TODO implement a more generic mapper for special attributes
+        */
+        if (column.getName().equalsIgnoreCase("InheritancePathString")) {
+            return "varchar(1800)";
+        }
+        String nativeType = this.getNativeType(column);
+        int sizePos = nativeType.indexOf("{0}");
+        StringBuilder sqlType = new StringBuilder();
+        sqlType.append(sizePos >= 0 ? nativeType.substring(0, sizePos) : nativeType);
+        Object sizeSpec = column.getSize();
+        if (sizeSpec == null) {
+            sizeSpec = this.getPlatformInfo().getDefaultSize(column.getTypeCode());
+        }
+
+        if (sizeSpec != null) {
+            if (this.getPlatformInfo().hasSize(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(detectSize(column));
+                sqlType.append(")");
+            } else if (this.getPlatformInfo().hasPrecisionAndScale(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(column.getSizeAsInt());
+                sqlType.append(",");
+                sqlType.append(column.getScale());
+                sqlType.append(")");
+            }
+        }
+
+        sqlType.append(sizePos >= 0 ? nativeType.substring(sizePos + "{0}".length()) : "");
+        return sqlType.toString();
+    }
+
+    // ddlutils cannot handle "complex" sizes out of the box, therefore adding support here:
+    // sizes beyond the SQL Server row limits are promoted to (MAX) columns
+    private String detectSize(Column column) {
+        if (this.getPlatformInfo().hasSize(column.getTypeCode())) {
+            if (column.getTypeCode() == Types.NVARCHAR && column.getSizeAsInt() > 4000) {
+                return "MAX";
+            }
+            if (column.getTypeCode() == Types.VARCHAR && column.getSizeAsInt() > 8000) {
+                return "MAX";
+            }
+            if (column.getTypeCode() == Types.VARBINARY && column.getSizeAsInt() > 8000) {
+                return "MAX";
+            }
+        }
+        return column.getSize();
+    }
+}
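The SQL Server builder applies the same template mechanics as the HANA builder but promotes oversized columns to (MAX) types. A short, self-contained sketch of those thresholds (illustrative only, not part of the patch):

    // Illustrative stand-in for the thresholds in MigrationHybrisMSSqlBuilder#detectSize.
    import java.sql.Types;

    public class MssqlMaxPromotionDemo {
        static String sizeSpec(int typeCode, int size) {
            boolean max = (typeCode == Types.NVARCHAR && size > 4000)
                    || (typeCode == Types.VARCHAR && size > 8000)
                    || (typeCode == Types.VARBINARY && size > 8000);
            return max ? "MAX" : String.valueOf(size);
        }

        public static void main(String[] args) {
            System.out.println("NVARCHAR(" + sizeSpec(Types.NVARCHAR, 5000) + ")");   // NVARCHAR(MAX)
            System.out.println("VARCHAR(" + sizeSpec(Types.VARCHAR, 255) + ")");      // VARCHAR(255)
            System.out.println("VARBINARY(" + sizeSpec(Types.VARBINARY, 9000) + ")"); // VARBINARY(MAX)
        }
    }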
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisMSSqlPlatform.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisMSSqlPlatform.java
new file mode 100644
index 0000000..bc0e138
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisMSSqlPlatform.java
@@ -0,0 +1,114 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+import de.hybris.bootstrap.ddl.sql.HybrisMSSqlBuilder;
+import org.apache.ddlutils.DatabaseOperationException;
+import org.apache.ddlutils.Platform;
+import org.apache.ddlutils.PlatformInfo;
+import org.apache.ddlutils.model.Column;
+import org.apache.ddlutils.model.Database;
+import org.apache.ddlutils.model.Table;
+import org.apache.ddlutils.platform.DatabaseMetaDataWrapper;
+import org.apache.ddlutils.platform.mssql.MSSqlModelReader;
+import org.apache.ddlutils.platform.mssql.MSSqlPlatform;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+public class MigrationHybrisMSSqlPlatform extends MSSqlPlatform implements HybrisPlatform {
+
+    private static final Logger LOG = LoggerFactory.getLogger(MigrationHybrisMSSqlPlatform.class);
+
+    private MigrationHybrisMSSqlPlatform() {
+    }
+
+    public static HybrisPlatform build(DatabaseSettings databaseSettings) {
+        MigrationHybrisMSSqlPlatform instance = new MigrationHybrisMSSqlPlatform();
+        instance.provideCustomMapping();
+        instance.setSqlBuilder(new MigrationHybrisMSSqlBuilder(instance, databaseSettings));
+        MigrationHybrisMSSqlPlatform.HybrisMSSqlModelReader reader = new MigrationHybrisMSSqlPlatform.HybrisMSSqlModelReader(instance);
+        reader.setDefaultTablePattern(databaseSettings.getTablePrefix() + '%');
+        instance.setModelReader(reader);
+        return instance;
+    }
+
+    public Database readModelFromDatabase(String name) throws DatabaseOperationException {
+        return this.readModelFromDatabase(name, null, null, null);
+    }
+
+    private void provideCustomMapping() {
+        PlatformInfo platformInfo = this.getPlatformInfo();
+        platformInfo.setMaxColumnNameLength(30);
+        // 12000..12003 are the custom hybris JDBC type codes (cf. PlatformJDBCMappingProvider)
+        platformInfo.addNativeTypeMapping(12002, "BIGINT", Types.BIGINT);
+        platformInfo.addNativeTypeMapping(12000, "NVARCHAR(MAX)", Types.LONGVARCHAR);
+        platformInfo.addNativeTypeMapping(12003, "NVARCHAR(MAX)", Types.LONGVARCHAR);
+        platformInfo.addNativeTypeMapping(12001, "NVARCHAR(MAX)", Types.LONGVARCHAR);
+        platformInfo.addNativeTypeMapping(Types.BIGINT, "BIGINT");
+        platformInfo.addNativeTypeMapping(Types.VARCHAR, "NVARCHAR");
+        platformInfo.addNativeTypeMapping(Types.BIT, "TINYINT");
+        platformInfo.addNativeTypeMapping(Types.INTEGER, "INTEGER");
+        platformInfo.addNativeTypeMapping(Types.SMALLINT, "INTEGER");
+        platformInfo.addNativeTypeMapping(Types.TINYINT, "TINYINT", Types.TINYINT);
+        platformInfo.addNativeTypeMapping(Types.DOUBLE, "FLOAT", Types.DOUBLE);
+        platformInfo.addNativeTypeMapping(Types.FLOAT, "FLOAT", Types.DOUBLE);
+        platformInfo.addNativeTypeMapping(Types.NVARCHAR, "NVARCHAR", Types.NVARCHAR);
+        platformInfo.addNativeTypeMapping(Types.TIME, "DATETIME2", Types.TIMESTAMP);
+        platformInfo.addNativeTypeMapping(Types.TIMESTAMP, "DATETIME2");
+        platformInfo.addNativeTypeMapping(Types.BLOB, "VARBINARY(MAX)");
+    }
+
+    public String getTableName(Table table) {
+        return this.getSqlBuilder().getTableName(table);
+    }
+
+    public String getColumnName(Column column) {
+        return ((HybrisMSSqlBuilder) this.getSqlBuilder()).getColumnName(column);
+    }
+
+    @Override
+    public void alterTables(Connection connection, Database desiredModel, boolean continueOnError) throws DatabaseOperationException {
+        String sql = this.getAlterTablesSql(connection, desiredModel);
+        LOG.info(sql);
+        this.evaluateBatch(connection, sql, continueOnError);
+    }
+
+    private static class HybrisMSSqlModelReader extends MSSqlModelReader {
+        private static final String TABLE_NAME_KEY = "TABLE_NAME";
+        // SQL Server system tables that must not be part of the migration model
+        private final Set<String> tablesToExclude = new HashSet<String>() {
+            {
+                this.add("trace_xe_action_map");
+                this.add("trace_xe_event_map");
+            }
+        };
+
+        public HybrisMSSqlModelReader(Platform platform) {
+            super(platform);
+        }
+
+        protected Table readTable(DatabaseMetaDataWrapper metaData, Map values) throws SQLException {
+            return this.tableShouldBeExcluded(values) ? null : super.readTable(metaData, values);
+        }
+
+        private boolean tableShouldBeExcluded(Map values) {
+            String tableName = this.getTableNameFrom(values);
+            return tableName != null && this.tablesToExclude.contains(tableName.toLowerCase());
+        }
+
+        private String getTableNameFrom(Map values) {
+            return (String) values.get(TABLE_NAME_KEY);
+        }
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresBuilder.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresBuilder.java
new file mode 100644
index 0000000..2d018f6
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresBuilder.java
@@ -0,0 +1,90 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ddlutils.Platform;
+import org.apache.ddlutils.model.Column;
+import org.apache.ddlutils.platform.postgresql.PostgreSqlBuilder;
+
+import java.sql.Types;
+
+public class MigrationHybrisPostGresBuilder extends PostgreSqlBuilder {
+
+    public MigrationHybrisPostGresBuilder(Platform platform) {
+        super(platform);
+    }
+
+    @Override
+    protected String getSqlType(Column column) {
+        String nativeType = this.getNativeType(column);
+        int sizePos = nativeType.indexOf("{0}");
+        StringBuilder sqlType = new StringBuilder();
+
+        // large character columns become "text"; PostgreSQL has no (N)VARCHAR(MAX)
+        if (column.getTypeCode() == Types.NVARCHAR && column.getSizeAsInt() > 5000) {
+            return sqlType.append("text").toString();
+        }
+
+        sqlType.append(sizePos >= 0 ? nativeType.substring(0, sizePos) : nativeType);
+        Object sizeSpec = column.getSize();
+        if (sizeSpec == null) {
+            sizeSpec = this.getPlatformInfo().getDefaultSize(column.getTypeCode());
+        }
+
+        if (sizeSpec != null) {
+            if (this.getPlatformInfo().hasSize(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(detectSize(column));
+                sqlType.append(")");
+            } else if (this.getPlatformInfo().hasPrecisionAndScale(column.getTypeCode())) {
+                sqlType.append("(");
+                sqlType.append(column.getSizeAsInt());
+                sqlType.append(",");
+                sqlType.append(column.getScale());
+                sqlType.append(")");
+            }
+        }
+
+        sqlType.append(sizePos >= 0 ? nativeType.substring(sizePos + "{0}".length()) : "");
+        return sqlType.toString();
+    }
+
+    // Unlike the SQL Server builder, this must not return "MAX": varchar(MAX) is
+    // not valid PostgreSQL syntax, and PostgreSQL's varchar accepts the large
+    // sizes natively anyway, so the declared size is passed through unchanged.
+    private String detectSize(Column column) {
+        return column.getSize();
+    }
+
+    @Override
+    public boolean isValidDefaultValue(String defaultSpec, int typeCode) {
+        // only plain numeric literals are accepted as column defaults;
+        // StringUtils.isNumeric already rejects null and empty strings
+        return StringUtils.isNumeric(defaultSpec);
+    }
+
+    @Override
+    public String getColumnName(final Column column) {
+        // column names are used as-is, without delimiters or case mangling
+        return column.getName();
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresPlatform.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresPlatform.java
new file mode 100644
index 0000000..b3ad417
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/repository/platform/MigrationHybrisPostGresPlatform.java
@@ -0,0 +1,61 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.repository.platform;
+
+import de.hybris.bootstrap.ddl.DatabaseSettings;
+import de.hybris.bootstrap.ddl.HybrisPlatform;
+import org.apache.ddlutils.PlatformInfo;
+import org.apache.ddlutils.model.Column;
+import org.apache.ddlutils.model.Table;
+import org.apache.ddlutils.platform.postgresql.PostgreSqlPlatform;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Types;
+
+public class MigrationHybrisPostGresPlatform extends PostgreSqlPlatform implements HybrisPlatform {
+
+    private static final Logger LOG = LoggerFactory.getLogger(MigrationHybrisPostGresPlatform.class);
+
+    private MigrationHybrisPostGresPlatform() {
+        super();
+    }
+
+    public static HybrisPlatform build(DatabaseSettings databaseSettings) {
+        MigrationHybrisPostGresPlatform instance = new MigrationHybrisPostGresPlatform();
+        instance.provideCustomMapping();
+        instance.setSqlBuilder(new MigrationHybrisPostGresBuilder(instance));
+        return instance;
+    }
+
+    private void provideCustomMapping() {
+        PlatformInfo platformInfo = this.getPlatformInfo();
+        platformInfo.setMaxColumnNameLength(31);
+        platformInfo.addNativeTypeMapping(Types.NVARCHAR, "VARCHAR", Types.VARCHAR);
+        // CHAR/NCHAR flags are persisted as small integers
+        platformInfo.addNativeTypeMapping(Types.NCHAR, "int2", Types.TINYINT);
+        platformInfo.addNativeTypeMapping(Types.CHAR, "int2", Types.TINYINT);
+        platformInfo.setHasSize(Types.CHAR, false);
+        platformInfo.setHasSize(Types.NCHAR, false);
+        platformInfo.setHasSize(Types.NVARCHAR, true);
+        platformInfo.setHasSize(Types.VARCHAR, true);
+        platformInfo.addNativeTypeMapping(Types.BIGINT, "int8");
+        platformInfo.addNativeTypeMapping(Types.INTEGER, "int2");
+        platformInfo.addNativeTypeMapping(Types.SMALLINT, "int2");
+        platformInfo.addNativeTypeMapping(Types.TINYINT, "int2");
+        platformInfo.addNativeTypeMapping(Types.DOUBLE, "float8");
+    }
+
+    @Override
+    public String getTableName(Table table) {
+        return this.getSqlBuilder().getTableName(table);
+    }
+
+    public 
String getColumnName(Column column) { + return ((MigrationHybrisPostGresBuilder) this.getSqlBuilder()).getColumnName(column); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/DatabaseCopyScheduler.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/DatabaseCopyScheduler.java new file mode 100644 index 0000000..a4bd880 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/DatabaseCopyScheduler.java @@ -0,0 +1,25 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.scheduler; + +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; + +import java.time.OffsetDateTime; + +/** + * Scheduler for Cluster Migration + */ +public interface DatabaseCopyScheduler { + void schedule(CopyContext context) throws Exception; + + MigrationStatus getCurrentState(CopyContext context, OffsetDateTime since) throws Exception; + + boolean isAborted(CopyContext context) throws Exception; + + void abort(CopyContext context) throws Exception; +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/impl/CustomClusterDatabaseCopyScheduler.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/impl/CustomClusterDatabaseCopyScheduler.java new file mode 100644 index 0000000..573d4dd --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/scheduler/impl/CustomClusterDatabaseCopyScheduler.java @@ -0,0 +1,350 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.scheduler.impl; + +import com.sap.cx.boosters.commercedbsync.events.CopyCompleteEvent; +import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler; +import de.hybris.platform.cluster.PingBroadcastHandler; +import de.hybris.platform.core.Registry; +import de.hybris.platform.core.Tenant; +import de.hybris.platform.jalo.JaloSession; +import de.hybris.platform.servicelayer.cluster.ClusterService; +import de.hybris.platform.servicelayer.event.EventService; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang3.time.DurationFormatUtils; +import org.apache.commons.lang3.tuple.Pair; +import com.sap.cx.boosters.commercedbsync.MigrationProgress; +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.adapter.DataRepositoryAdapter; +import com.sap.cx.boosters.commercedbsync.adapter.impl.ContextualDataRepositoryAdapter; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.events.CopyDatabaseTableEvent; +import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTask; +import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.slf4j.MDC; +import org.springframework.core.io.ClassPathResource; + +import java.time.Duration; +import java.time.Instant; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; +import java.time.temporal.ChronoUnit; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import 
java.util.Set;
+import java.util.stream.Collectors;
+
+import static com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants.MDC_CLUSTERID;
+import static com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants.MDC_PIPELINE;
+
+/**
+ * Scheduler for Cluster Based Migrations
+ */
+public class CustomClusterDatabaseCopyScheduler implements DatabaseCopyScheduler {
+
+    private static final Logger LOG = LoggerFactory.getLogger(CustomClusterDatabaseCopyScheduler.class);
+
+    private EventService eventService;
+
+    private ClusterService clusterService;
+
+    private DatabaseCopyTaskRepository databaseCopyTaskRepository;
+
+    /**
+     * Schedules a data copy task for each table across all available cluster nodes
+     *
+     * @param context the copy context describing the items to migrate
+     * @throws Exception
+     */
+    @Override
+    public void schedule(CopyContext context) throws Exception {
+        String sqlScript;
+        // ORACLE_TARGET - START
+        if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isOracleUsed()) {
+            sqlScript = "/sql/createSchedulerTablesOracle.sql";
+        } else if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isHanaUsed()) {
+            sqlScript = "/sql/createSchedulerTablesHana.sql";
+        } else if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isPostgreSqlUsed()) {
+            sqlScript = "/sql/createSchedulerTablesPostGres.sql";
+        } else {
+            sqlScript = "/sql/createSchedulerTables.sql";
+        }
+        logMigrationContext(context.getMigrationContext());
+        // ORACLE_TARGET - END
+        context.getMigrationContext().getDataTargetRepository().runSqlScript(new ClassPathResource(sqlScript));
+        int ownNodeId = clusterService.getClusterId();
+        if (!CollectionUtils.isEmpty(context.getCopyItems())) {
+            databaseCopyTaskRepository.createMigrationStatus(context);
+            final List<Integer> nodeIds = getClusterNodes(context);
+            int nodeIndex = 0;
+            DataRepositoryAdapter dataRepositoryAdapter = new ContextualDataRepositoryAdapter(context.getMigrationContext().getDataSourceRepository());
+            List<Pair<CopyContext.DataCopyItem, Long>> itemsToSchedule = generateSchedulerItemList(context, dataRepositoryAdapter);
+            for (final Pair<CopyContext.DataCopyItem, Long> itemToSchedule : itemsToSchedule) {
+                CopyContext.DataCopyItem dataCopyItem = itemToSchedule.getLeft();
+                final long sourceRowCount = itemToSchedule.getRight();
+                if (sourceRowCount > 0) {
+                    // simple round-robin assignment over the available nodes
+                    if (nodeIndex >= nodeIds.size()) {
+                        nodeIndex = 0;
+                    }
+                    final int destinationNodeId = nodeIds.get(nodeIndex);
+                    databaseCopyTaskRepository.scheduleTask(context, dataCopyItem, sourceRowCount, destinationNodeId);
+                    nodeIndex++;
+                } else {
+                    // empty tables are scheduled and immediately marked completed on this node
+                    databaseCopyTaskRepository.scheduleTask(context, dataCopyItem, sourceRowCount, ownNodeId);
+                    databaseCopyTaskRepository.markTaskCompleted(context, dataCopyItem, "0");
+                }
+            }
+            startMonitorThread(context);
+            final CopyDatabaseTableEvent event = new CopyDatabaseTableEvent(ownNodeId, context.getMigrationId());
+            eventService.publishEvent(event);
+        }
+    }
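+    /*
+     * Illustrative sketch (not part of the original patch): the assignment in
+     * schedule() is a plain round-robin over the cluster node ids, i.e.
+     *
+     *   int destinationNodeId = nodeIds.get(nodeIndex % nodeIds.size());
+     *
+     * With nodes [0, 1] and tables with row counts {0, 10, 250, 9000} (sorted
+     * ascending), the result is: 10 -> node 0, 250 -> node 1, 9000 -> node 0,
+     * while the empty table is marked completed right away on the scheduling node.
+     */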
LOG.info("isDisableAllIndexesEnabled=" + context.isDisableAllIndexesEnabled()); + LOG.info("isDropAllIndexesEnabled=" + context.isDropAllIndexesEnabled()); + LOG.info("isFailOnErrorEnabled=" + context.isFailOnErrorEnabled()); + LOG.info("isIncrementalModeEnabled=" + context.isIncrementalModeEnabled()); + LOG.info("isMigrationTriggeredByUpdateProcess=" + context.isMigrationTriggeredByUpdateProcess()); + LOG.info("isRemoveMissingColumnsToSchemaEnabled=" + context.isRemoveMissingColumnsToSchemaEnabled()); + LOG.info("isRemoveMissingTablesToSchemaEnabled=" + context.isRemoveMissingTablesToSchemaEnabled()); + LOG.info("isSchemaMigrationAutoTriggerEnabled=" + context.isSchemaMigrationAutoTriggerEnabled()); + LOG.info("isSchemaMigrationEnabled=" + context.isSchemaMigrationEnabled()); + LOG.info("isTruncateEnabled=" + context.isTruncateEnabled()); + LOG.info("getIncludedTables=" + context.getIncludedTables()); + LOG.info("getExcludedTables=" + context.getExcludedTables()); + LOG.info("getIncrementalTables=" + context.getIncrementalTables()); + LOG.info("getTruncateExcludedTables=" + context.getTruncateExcludedTables()); + LOG.info("getCustomTables=" + context.getCustomTables()); + LOG.info("getIncrementalTimestamp=" + context.getIncrementalTimestamp()); + LOG.info( + "Source TS Name=" + context.getDataSourceRepository().getDataSourceConfiguration().getTypeSystemName()); + LOG.info("Source TS Suffix =" + + context.getDataSourceRepository().getDataSourceConfiguration().getTypeSystemSuffix()); + LOG.info( + "Target TS Name=" + context.getDataTargetRepository().getDataSourceConfiguration().getTypeSystemName()); + LOG.info("Target TS Suffix =" + + context.getDataTargetRepository().getDataSourceConfiguration().getTypeSystemSuffix()); + + LOG.info("--------MIGRATION CONTEXT- END----------"); + } + private List> generateSchedulerItemList(CopyContext context, DataRepositoryAdapter dataRepositoryAdapter) throws Exception { + List> pairs = new ArrayList<>(); + for (CopyContext.DataCopyItem copyItem : context.getCopyItems()) { + pairs.add(Pair.of(copyItem, dataRepositoryAdapter.getRowCount(context.getMigrationContext(), copyItem.getSourceItem()))); + } + //we sort the items to make sure big tables are assigned to nodes in a fair way + return pairs.stream().sorted((p1, p2) -> Long.compare(p1.getRight(), p2.getRight())).collect(Collectors.toList()); + } + + /** + * Starts a thread to monitor the migration + * + * @param context + */ + private void startMonitorThread(CopyContext context) { + JaloSession jaloSession = JaloSession.getCurrentSession(); + + Thread monitor = new Thread(new MigrationMonitor(context, jaloSession), "MigrationMonitor"); + monitor.start(); + } + + @Override + public MigrationStatus getCurrentState(CopyContext context, OffsetDateTime since) throws Exception { + Objects.requireNonNull(context); + Objects.requireNonNull(since); + + MigrationStatus status = databaseCopyTaskRepository.getMigrationStatus(context); + if (!since.equals(OffsetDateTime.MAX)) { + Set updated = databaseCopyTaskRepository.getUpdatedTasks(context, since); + List statusUpdates = new ArrayList<>(updated); + statusUpdates.sort(Comparator.comparing(DatabaseCopyTask::getLastUpdate).thenComparing(DatabaseCopyTask::getPipelinename)); + status.setStatusUpdates(statusUpdates); + } + return status; + } + + @Override + public boolean isAborted(CopyContext context) throws Exception { + MigrationStatus current = this.databaseCopyTaskRepository.getMigrationStatus(context); + return 
+    @Override
+    public boolean isAborted(CopyContext context) throws Exception {
+        MigrationStatus current = this.databaseCopyTaskRepository.getMigrationStatus(context);
+        return MigrationProgress.ABORTED.equals(current.getStatus());
+    }
+
+    @Override
+    public void abort(CopyContext context) throws Exception {
+        this.databaseCopyTaskRepository.setMigrationStatus(context, MigrationProgress.ABORTED);
+        stopPerformanceProfiling(context);
+    }
+
+    private void stopPerformanceProfiling(CopyContext context) {
+        if (context.getPerformanceProfiler() != null) {
+            context.getPerformanceProfiler().reset();
+        }
+    }
+
+    private List<Integer> getClusterNodes(CopyContext context) {
+        if (!context.getMigrationContext().isClusterMode()) {
+            return Collections.singletonList(clusterService.getClusterId());
+        }
+        final List<Integer> nodeIds = new ArrayList<>();
+        try {
+            // same code as the hac cluster overview page
+            PingBroadcastHandler pingBroadcastHandler = PingBroadcastHandler.getInstance();
+            pingBroadcastHandler.getNodes().forEach(i -> nodeIds.add(i.getNodeID()));
+        } catch (final Exception e) {
+            LOG.warn("Using single cluster node because an error was encountered while fetching cluster nodes information: {{}}", e.getMessage(), e);
+        }
+        if (CollectionUtils.isEmpty(nodeIds)) {
+            nodeIds.add(clusterService.getClusterId());
+        }
+        return nodeIds;
+    }
+
+    public void setClusterService(ClusterService clusterService) {
+        this.clusterService = clusterService;
+    }
+
+    public void setDatabaseCopyTaskRepository(DatabaseCopyTaskRepository databaseCopyTaskRepository) {
+        this.databaseCopyTaskRepository = databaseCopyTaskRepository;
+    }
+
+    public void setEventService(EventService eventService) {
+        this.eventService = eventService;
+    }
+
+    /**
+     * Thread to monitor the migration
+     */
+    private class MigrationMonitor implements Runnable {
+        private final CopyContext context;
+        private final Map<String, String> contextMap;
+        private final Tenant tenant;
+        private final JaloSession jaloSession;
+        private OffsetDateTime lastUpdate = OffsetDateTime.of(1970, 1, 1, 0, 0, 0, 0, ZoneOffset.UTC);
+
+        public MigrationMonitor(CopyContext context, JaloSession jaloSession) {
+            this.context = context;
+            this.contextMap = MDC.getCopyOfContextMap();
+            this.jaloSession = jaloSession;
+            this.tenant = jaloSession.getTenant();
+        }
+
+        @Override
+        public void run() {
+            try {
+                prepareThread();
+                pollState();
+                notifyFinished();
+            } catch (Exception e) {
+                LOG.error("Failed getting current state", e);
+            } finally {
+                cleanupThread();
+            }
+        }
+
+        /**
+         * Polls the migration state and detects whether the migration has stalled
+         *
+         * @throws Exception
+         */
+        private void pollState() throws Exception {
+            MigrationStatus currentState;
+            do {
+                currentState = getCurrentState(context, lastUpdate);
+                lastUpdate = OffsetDateTime.now(ZoneOffset.UTC);
+
+                // setting deletion
+                if (context.getMigrationContext().isDeletionEnabled()) {
+                    currentState.setDeletionEnabled(true);
+                }
+
+                logState(currentState);
+                Duration elapsedTillLastUpdate = Duration.between(currentState.getLastUpdate().toInstant(ZoneOffset.UTC), Instant.now());
+                int stalledTimeout = context.getMigrationContext().getStalledTimeout();
+                if (elapsedTillLastUpdate.compareTo(Duration.of(stalledTimeout, ChronoUnit.SECONDS)) >= 0) {
+                    LOG.error("Migration stalled!");
+                    databaseCopyTaskRepository.setMigrationStatus(context, MigrationProgress.STALLED);
+                }
+                Thread.sleep(5000);
+            } while (!currentState.isCompleted());
+        }
+
+        /**
+         * Notifies nodes about termination
+         */
+        private void notifyFinished() {
+            final CopyCompleteEvent completeEvent = new CopyCompleteEvent(clusterService.getClusterId(), context.getMigrationId());
+            eventService.publishEvent(completeEvent);
+        }
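+        /*
+         * Illustrative sketch (not part of the original patch): the stall check in
+         * pollState() compares the time since the last task update against the
+         * configured timeout, e.g. with a stalledTimeout of 7200 seconds:
+         *
+         *   Duration idle = Duration.between(lastTaskUpdateUtc, Instant.now());
+         *   boolean stalled = idle.compareTo(Duration.ofSeconds(7200)) >= 0;
+         *
+         * i.e. two hours without any task progress flips the status to STALLED.
+         */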
+        /**
+         * Logs the current migration state
+         *
+         * @param status the status snapshot to log
+         */
+        private void logState(MigrationStatus status) {
+            for (final DatabaseCopyTask copyTask : status.getStatusUpdates()) {
+                try (MDC.MDCCloseable ignore = MDC.putCloseable(MDC_PIPELINE, copyTask.getPipelinename());
+                     MDC.MDCCloseable ignore2 = MDC.putCloseable(MDC_CLUSTERID, String.valueOf(copyTask.getTargetnodeId()))) {
+                    if (copyTask.isFailure()) {
+                        LOG.error("{}/{} processed. FAILED in {{}}. Cause: {{}} Last Update: {{}}", copyTask.getTargetrowcount(), copyTask.getSourcerowcount(), copyTask.getDuration(), copyTask.getError(), copyTask.getLastUpdate());
+                    } else if (copyTask.isCompleted()) {
+                        LOG.info("{}/{} processed. Completed in {{}}. Last Update: {{}}", copyTask.getTargetrowcount(), copyTask.getSourcerowcount(), copyTask.getDuration(), copyTask.getLastUpdate());
+                    } else {
+                        LOG.debug("{}/{} processed. Last Update: {{}}", copyTask.getTargetrowcount(), copyTask.getSourcerowcount(), copyTask.getLastUpdate());
+                    }
+                }
+            }
+            LOG.info("{}/{} tables migrated. {} failed. State: {}", status.getCompletedTasks(), status.getTotalTasks(), status.getFailedTasks(), status.getStatus());
+            if (status.isCompleted()) {
+                String endState = "finished";
+                if (status.isFailed()) {
+                    endState = "FAILED";
+                }
+                LOG.info("Migration {} ({}) in {}", endState, status.getStatus(), DurationFormatUtils.formatDurationHMS(Duration.between(status.getStart(), status.getEnd()).toMillis()));
+            }
+        }
+
+        protected void prepareThread() {
+            MDC.setContextMap(contextMap);
+
+            // tenant
+            Registry.setCurrentTenant(tenant);
+            // jalo session
+            this.jaloSession.activate();
+        }
+
+        protected void cleanupThread() {
+            MDC.clear();
+
+            // jalo session
+            JaloSession.deactivate();
+            // tenant
+            Registry.unsetCurrentTenant();
+        }
+    }
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseCopyTaskRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseCopyTaskRepository.java
new file mode 100644
index 0000000..6e64b05
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseCopyTaskRepository.java
@@ -0,0 +1,130 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.service;
+
+import com.sap.cx.boosters.commercedbsync.MigrationProgress;
+import com.sap.cx.boosters.commercedbsync.MigrationStatus;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext.DataCopyItem;
+
+import java.time.OffsetDateTime;
+import java.util.Set;
+
+/**
+ * Repository to manage migration status and tasks
+ */
+public interface DatabaseCopyTaskRepository {
+
+    /**
+     * Creates a new DB migration status record
+     *
+     * @param context the current copy context
+     * @throws Exception
+     */
+    void createMigrationStatus(CopyContext context) throws Exception;
+
+    /**
+     * Updates the migration status record
+     *
+     * @param context the current copy context
+     * @param progress the progress state to set
+     * @throws Exception
+     */
+    void setMigrationStatus(CopyContext context, MigrationProgress progress) throws Exception;
+
+    /**
+     * Updates the migration status record from one status to another
+     *
+     * @param context the current copy context
+     * @param from the expected current progress state
+     * @param to the progress state to transition to
+     * @throws Exception
+     */
+    void setMigrationStatus(CopyContext context, MigrationProgress from, MigrationProgress to) throws Exception;
+
+    /**
+     * Retrieves the current migration status
+     *
+     * @param context the current copy context
+     * @return the current status
+     * @throws Exception
+     */
+    MigrationStatus getMigrationStatus(CopyContext context) throws Exception;
+
+    /**
+     * Schedules a copy task
+     *
+     * @param context the migration context
+     * @param copyItem the item to copy
+     * @param sourceRowCount the number of rows in the source table
+     * @param targetNode the nodeId to perform the copy
+     * @throws Exception
+     */
+    void scheduleTask(CopyContext context, CopyContext.DataCopyItem copyItem, long sourceRowCount, int targetNode) throws Exception;
+
+    /**
+     * Retrieves all pending tasks
+     *
+     * @param context the current copy context
+     * @return the pending tasks of the current node
+     * @throws Exception
+     */
+    Set<DatabaseCopyTask> findPendingTasks(CopyContext context) throws Exception;
+
+    /**
+     * Updates progress on a task
+     *
+     * @param context the current copy context
+     * @param copyItem the item being copied
+     * @param itemCount the number of rows copied so far
+     * @throws Exception
+     */
+    void updateTaskProgress(CopyContext context, CopyContext.DataCopyItem copyItem, long itemCount) throws Exception;
+
+    /**
+     * Marks the task as completed
+     *
+     * @param context the current copy context
+     * @param copyItem the item that was copied
+     * @param duration the formatted duration of the copy
+     * @throws Exception
+     */
+    void markTaskCompleted(CopyContext context, CopyContext.DataCopyItem copyItem, String duration) throws Exception;
+
+    /**
+     * Marks the task as failed
+     *
+     * @param context the current copy context
+     * @param copyItem the item that failed
+     * @param error the cause of the failure
+     * @throws Exception
+     */
+    void markTaskFailed(CopyContext context, CopyContext.DataCopyItem copyItem, Exception error) throws Exception;
+
+    /**
+     * Gets all tasks updated since the given offset
+     *
+     * @param context the current copy context
+     * @param since offset
+     * @return the updated tasks
+     * @throws Exception
+     */
+    Set<DatabaseCopyTask> getUpdatedTasks(CopyContext context, OffsetDateTime since) throws Exception;
+
+    Set<DatabaseCopyTask> getAllTasks(CopyContext context) throws Exception;
+
+    /**
+     * ORACLE_TARGET: marks the task as completed and additionally records the
+     * duration in seconds
+     *
+     * @param context the current copy context
+     * @param copyItem the item that was copied
+     * @param duration the formatted duration of the copy
+     * @param durationseconds the duration in seconds
+     * @throws Exception
+     */
+    void markTaskCompleted(CopyContext context, DataCopyItem copyItem, String duration, float durationseconds)
+            throws Exception;
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationCopyService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationCopyService.java
new file mode 100644
index 0000000..1694a53
--- /dev/null
+++ 
b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationCopyService.java @@ -0,0 +1,18 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.service; + +import com.sap.cx.boosters.commercedbsync.context.CopyContext; + + +/** + * Actual Service to perform the Migration + */ +public interface DatabaseMigrationCopyService { + + void copyAllAsync(CopyContext context); + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationDataTypeMapperService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationDataTypeMapperService.java new file mode 100644 index 0000000..a1205b6 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationDataTypeMapperService.java @@ -0,0 +1,21 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.service; + +import java.io.IOException; +import java.sql.SQLException; + + +/** + * Service to deal with Mapping different types between Databases + */ +public interface DatabaseMigrationDataTypeMapperService { + + /** + * Converts BLOB, CLOB and NCLOB Data + */ + Object dataTypeMapper(final Object sourceColumnValue, final int jdbcType) throws IOException, SQLException; +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportService.java new file mode 100644 index 0000000..b17a958 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportService.java @@ -0,0 +1,15 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.service; + +import com.sap.cx.boosters.commercedbsync.MigrationReport; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; + +public interface DatabaseMigrationReportService { + + MigrationReport getMigrationReport(CopyContext copyContext) throws Exception; + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportStorageService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportStorageService.java new file mode 100644 index 0000000..677b3fa --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationReportStorageService.java @@ -0,0 +1,15 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.service;
+
+import java.io.InputStream;
+
+public interface DatabaseMigrationReportStorageService {
+    void store(String fileName, InputStream inputStream) throws Exception;
+
+    boolean validateConnection();
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationService.java
new file mode 100644
index 0000000..dd5de45
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationService.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.service;
+
+import com.sap.cx.boosters.commercedbsync.MigrationReport;
+import com.sap.cx.boosters.commercedbsync.MigrationStatus;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import java.time.OffsetDateTime;
+
+public interface DatabaseMigrationService {
+
+    /**
+     * Asynchronously starts a new database migration
+     *
+     * @param context Migration configuration
+     * @return migrationID of the started migration
+     * @throws Exception if anything goes wrong during start
+     */
+    String startMigration(MigrationContext context) throws Exception;
+
+    /**
+     * Stops the database migration process.
+     * The process is stopped on all nodes, in case clustering is used.
+     *
+     * @param context Migration configuration
+     * @param migrationID ID of the migration process that should be stopped
+     * @throws Exception if anything goes wrong
+     */
+    void stopMigration(MigrationContext context, String migrationID) throws Exception;
+
+    /**
+     * Gets the current overall state without details
+     *
+     * @param context Migration configuration
+     * @param migrationID ID of the migration process
+     * @return the current migration status
+     * @throws Exception
+     */
+    MigrationStatus getMigrationState(MigrationContext context, String migrationID) throws Exception;
+
+    /**
+     * Gets the current state with details per copy task
+     *
+     * @param context Migration configuration
+     * @param migrationID ID of the migration process
+     * @param since Get all updates since this timestamp. Must be in UTC!
+     * @return the current migration status including task updates
+     * @throws Exception
+     */
+    MigrationStatus getMigrationState(MigrationContext context, String migrationID, OffsetDateTime since) throws Exception;
+
+    MigrationReport getMigrationReport(MigrationContext context, String migrationID) throws Exception;
+
+    /**
+     * Busy-waits until the migration is done. Use only for tests!
+     *
+     * @param context Migration configuration
+     * @param migrationID ID of the migration process
+     * @return the final migration status
+     * @throws Exception when migration was not successful
+     */
+    MigrationStatus waitForFinish(MigrationContext context, String migrationID) throws Exception;
+}
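The interface above defines the full lifecycle of a migration run. A minimal sketch of the intended start/poll/stop/report sequence (illustrative only; the service and context instances are assumed to be Spring-wired beans):

    // Illustrative usage sketch of DatabaseMigrationService, not part of the patch.
    import com.sap.cx.boosters.commercedbsync.MigrationReport;
    import com.sap.cx.boosters.commercedbsync.MigrationStatus;
    import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
    import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationService;

    public class MigrationRunSketch {
        public static MigrationReport run(DatabaseMigrationService service, MigrationContext ctx) throws Exception {
            String migrationId = service.startMigration(ctx);   // asynchronous kick-off
            MigrationStatus status = service.getMigrationState(ctx, migrationId);
            if (status.isFailed()) {
                service.stopMigration(ctx, migrationId);         // stops on all cluster nodes
            }
            return service.getMigrationReport(ctx, migrationId); // configuration + task summary
        }
    }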
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationSynonymService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationSynonymService.java
new file mode 100644
index 0000000..50f97a2
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseMigrationSynonymService.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.service;
+
+import com.sap.cx.boosters.commercedbsync.repository.DataRepository;
+
+public interface DatabaseMigrationSynonymService {
+
+    /**
+     * CCv2 workaround: the CCv2 builder does not support table prefixes yet.
+     * Creates a synonym ydeployments -> prefix_ydeployments and a synonym
+     * attributedescriptors -> prefix_attributedescriptors.
+     *
+     * @param repository the repository to create the synonyms in
+     * @param prefix the table prefix the synonyms should point to
+     * @throws Exception
+     */
+    void recreateSynonyms(DataRepository repository, String prefix) throws Exception;
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseSchemaDifferenceService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseSchemaDifferenceService.java
new file mode 100644
index 0000000..daecf92
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/DatabaseSchemaDifferenceService.java
@@ -0,0 +1,30 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.service;
+
+import com.sap.cx.boosters.commercedbsync.service.impl.DefaultDatabaseSchemaDifferenceService;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+/**
+ * Calculates and applies schema differences between two databases
+ */
+public interface DatabaseSchemaDifferenceService {
+
+    String generateSchemaDifferencesSql(MigrationContext context) throws Exception;
+
+    void executeSchemaDifferencesSql(MigrationContext context, String sql) throws Exception;
+
+    void executeSchemaDifferences(MigrationContext context) throws Exception;
+
+    /**
+     * Calculates the differences between two schemas
+     *
+     * @param migrationContext the migration context to inspect
+     * @return the schema difference result
+     */
+    DefaultDatabaseSchemaDifferenceService.SchemaDifferenceResult getDifference(MigrationContext migrationContext) throws Exception;
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/BlobDatabaseMigrationReportStorageService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/BlobDatabaseMigrationReportStorageService.java
new file mode 100644
index 0000000..ed59da6
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/BlobDatabaseMigrationReportStorageService.java
@@ -0,0 +1,114 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.service.impl;
+
+import com.microsoft.azure.storage.CloudStorageAccount;
+import com.microsoft.azure.storage.NameValidator;
+import com.microsoft.azure.storage.blob.CloudBlob;
+import com.microsoft.azure.storage.blob.CloudBlobClient;
+import com.microsoft.azure.storage.blob.CloudBlobContainer;
+import com.microsoft.azure.storage.blob.CloudBlockBlob;
+import com.microsoft.azure.storage.blob.ListBlobItem;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportStorageService;
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang.StringUtils;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+
+public class BlobDatabaseMigrationReportStorageService implements DatabaseMigrationReportStorageService {
+
+    private static final Logger LOG = LoggerFactory.getLogger(BlobDatabaseMigrationReportStorageService.class.getName());
+
+    private static final String ROOT_CONTAINER = "migration";
+
+    private CloudBlobClient cloudBlobClient;
+
+    private MigrationContext migrationContext;
+
+    protected void init() throws Exception {
+        CloudStorageAccount account = CloudStorageAccount.parse(migrationContext.getMigrationReportConnectionString());
+        this.cloudBlobClient = account.createCloudBlobClient();
+    }
+
+    @Override
+    public void store(String fileName, InputStream inputStream) throws Exception {
+        String path = fileName;
+        if (inputStream != null) {
+            CloudBlockBlob blob = getContainer(ROOT_CONTAINER, true).getBlockBlobReference(path);
+            byte[] bytes = IOUtils.toByteArray(inputStream);
+            ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
+            blob.upload(bis, bytes.length);
+            bis.close();
+            LOG.info("File {} written to blob storage at {}/{}", path, ROOT_CONTAINER, path);
+        } else {
+            throw new IllegalArgumentException(String.format("Input Stream is null for root '%s' and path '%s'", ROOT_CONTAINER, path));
+        }
+    }
+
+    protected CloudBlobContainer getContainer(String name, boolean createIfNotExists) throws Exception {
+        CloudBlobContainer containerReference = getCloudBlobClient().getContainerReference(name);
+        if (createIfNotExists) {
+            containerReference.createIfNotExists();
+        }
+        return containerReference;
+    }
+
+    public List<CloudBlockBlob> listAllReports() throws Exception {
+        Iterable<ListBlobItem> migrationBlobs = getCloudBlobClient().getContainerReference(ROOT_CONTAINER).listBlobs();
+        List<CloudBlockBlob> result = new ArrayList<>();
+        migrationBlobs.forEach(blob -> result.add((CloudBlockBlob) blob));
+        return result;
+    }
+
+    public byte[] getReport(String reportId) throws Exception {
+        checkReportIdValid(reportId);
+        CloudBlob blob = getCloudBlobClient().getContainerReference(ROOT_CONTAINER).getBlobReferenceFromServer(reportId);
+        // size the buffer from the actual blob length; the stream write size is an
+        // unrelated upload chunk setting and must not be used here
+        byte[] output = new byte[(int) blob.getProperties().getLength()];
+        blob.downloadToByteArray(output, 0);
+        return output;
+    }
+
+    private void checkReportIdValid(String reportId) {
+        NameValidator.validateFileName(reportId);
+        if (StringUtils.contains(reportId, "/")) {
+            throw new IllegalArgumentException("Invalid report id provided");
+        }
+        if (!StringUtils.endsWith(reportId, ".json") && !StringUtils.endsWith(reportId, ".sql")) {
+            throw new IllegalArgumentException("Invalid file name ending provided");
+        }
+    }
+
+    protected CloudBlobClient getCloudBlobClient() throws Exception {
+        // lazily initialized from the migration report connection string
+        if (cloudBlobClient == null) {
+            init();
+        }
+        return cloudBlobClient;
+    }
+
+    @Override
+    public boolean validateConnection() {
+        try {
+            getCloudBlobClient().listContainers();
+        } catch (Exception e) {
+            return false;
+        }
+        return true;
+    }
+
+    public void setMigrationContext(MigrationContext migrationContext) {
+        this.migrationContext = migrationContext;
+    }
+}
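The implementation above stores reports in the "migration" container of the configured Azure storage account. A minimal usage sketch (illustrative only; the service instance is an assumed Spring-wired bean):

    // Illustrative usage sketch of the report storage service, not part of the patch.
    import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportStorageService;

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    public class ReportUploadSketch {
        public static void upload(DatabaseMigrationReportStorageService storage) throws Exception {
            byte[] payload = "{\"migrationId\":\"42\"}".getBytes(StandardCharsets.UTF_8);
            if (storage.validateConnection()) {
                // the id must end in .json or .sql to be retrievable later through
                // the implementation's getReport() validation
                storage.store("42-report.json", new ByteArrayInputStream(payload));
            }
        }
    }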
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseCopyTaskRepository.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseCopyTaskRepository.java
new file mode 100644
index 0000000..bcf92dc
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseCopyTaskRepository.java
@@ -0,0 +1,340 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsync.service.impl;
+
+import com.google.gson.Gson;
+import com.google.gson.reflect.TypeToken;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceCategory;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceRecorder;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceUnit;
+import de.hybris.platform.servicelayer.cluster.ClusterService;
+import org.apache.commons.lang3.StringUtils;
+import com.sap.cx.boosters.commercedbsync.MigrationProgress;
+import com.sap.cx.boosters.commercedbsync.MigrationStatus;
+import com.sap.cx.boosters.commercedbsync.context.CopyContext;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTask;
+import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.time.OffsetDateTime;
+import java.util.Calendar;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.TimeZone;
+
+/**
+ * Repository to manage the status of the migration copy tasks across the cluster
+ */
+public class DefaultDatabaseCopyTaskRepository implements DatabaseCopyTaskRepository {
+
+    private ClusterService clusterService;
+
+    @Override
+    public void createMigrationStatus(CopyContext context) throws Exception {
+        String insert = "INSERT INTO MIGRATIONTOOLKIT_TABLECOPYSTATUS (migrationId, total) VALUES (?, ?)";
+        try (Connection conn = getConnection(context);
+             PreparedStatement stmt = conn.prepareStatement(insert)) {
+            stmt.setObject(1, context.getMigrationId());
+            stmt.setObject(2, context.getCopyItems().size());
+            stmt.executeUpdate();
+            conn.commit();
+        }
+    }
+
+    @Override
+    public void setMigrationStatus(CopyContext context, MigrationProgress progress) throws Exception {
+        setMigrationStatus(context, MigrationProgress.RUNNING, progress);
+    }
AND migrationId = ?"; + try (Connection conn = getConnection(context); + PreparedStatement stmt = conn.prepareStatement(update) + ) { + stmt.setObject(1, to.name()); + stmt.setObject(2, from.name()); + stmt.setObject(3, context.getMigrationId()); + stmt.executeUpdate(); + conn.commit(); + } + } + + + @Override + public MigrationStatus getMigrationStatus(CopyContext context) throws Exception { + String query = "SELECT * FROM MIGRATIONTOOLKIT_TABLECOPYSTATUS WHERE migrationId = ?"; + try (Connection conn = getConnection(context); + PreparedStatement stmt = conn.prepareStatement(query) + ) { + stmt.setObject(1, context.getMigrationId()); + try (ResultSet rs = stmt.executeQuery()) { + rs.next(); + return convertToStatus(rs); + } + } + } + + /** + * @param rs result set to covert + * @return the equivalent Migration Status + * @throws Exception + */ + private MigrationStatus convertToStatus(ResultSet rs) throws Exception { + MigrationStatus status = new MigrationStatus(); + status.setMigrationID(rs.getString("migrationId")); + status.setStart(getDateTime(rs, "startAt")); + status.setEnd(getDateTime(rs, "endAt")); + status.setLastUpdate(getDateTime(rs, "lastUpdate")); + status.setTotalTasks(rs.getInt("total")); + status.setCompletedTasks(rs.getInt("completed")); + status.setFailedTasks(rs.getInt("failed")); + status.setStatus(MigrationProgress.valueOf(rs.getString("status"))); + + status.setCompleted(status.getTotalTasks() == status.getCompletedTasks() || MigrationProgress.STALLED.equals(status.getStatus())); + status.setFailed(status.getFailedTasks() > 0 || MigrationProgress.STALLED.equals(status.getStatus())); + status.setStatusUpdates(Collections.emptyList()); + + return status; + } + + private LocalDateTime getDateTime(ResultSet rs, String column) throws Exception { + Timestamp ts = rs.getObject(column, Timestamp.class); + return ts == null ? null : ts.toLocalDateTime(); + } + + + @Override + public void scheduleTask(CopyContext context, CopyContext.DataCopyItem copyItem, long sourceRowCount, int targetNode) throws Exception { + String insert = "INSERT INTO MIGRATIONTOOLKIT_TABLECOPYTASKS (targetnodeid, pipelinename, sourcetablename, targettablename, columnmap, migrationid, sourcerowcount, lastupdate) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"; + try (Connection conn = getConnection(context); + PreparedStatement stmt = conn.prepareStatement(insert) + ) { + stmt.setObject(1, targetNode); + stmt.setObject(2, copyItem.getPipelineName()); + stmt.setObject(3, copyItem.getSourceItem()); + stmt.setObject(4, copyItem.getTargetItem()); + stmt.setObject(5, new Gson().toJson(copyItem.getColumnMap())); + stmt.setObject(6, context.getMigrationId()); + stmt.setObject(7, sourceRowCount); + setTimestamp(stmt, 8, now()); + stmt.executeUpdate(); + conn.commit(); + } + } + + private Timestamp now() { + Instant now = java.time.Instant.now(); + Timestamp ts = new Timestamp(now.toEpochMilli()); + return ts; + } + + private Connection getConnection(CopyContext context) throws Exception { + return context.getMigrationContext().getDataTargetRepository().getConnection(); + } + + + @Override + public Set findPendingTasks(CopyContext context) throws Exception { + String sql = "SELECT * from MIGRATIONTOOLKIT_TABLECOPYTASKS WHERE targetnodeid=? AND migrationid=? 
+    @Override
+    public void updateTaskProgress(CopyContext context, CopyContext.DataCopyItem copyItem, long itemCount) throws Exception {
+        String sql = "UPDATE MIGRATIONTOOLKIT_TABLECOPYTASKS " +
+                "SET targetrowcount=?, " +
+                "lastupdate=?, " +
+                "avgwriterrowthroughput=?, " +
+                "avgreaderrowthroughput=? " +
+                "WHERE targetnodeid=? " +
+                "AND migrationid=? " +
+                "AND pipelinename=?";
+        try (Connection connection = getConnection(context);
+             PreparedStatement stmt = connection.prepareStatement(sql)) {
+            stmt.setObject(1, itemCount);
+            setTimestamp(stmt, 2, now());
+            stmt.setObject(3, getAvgPerformanceValue(context, PerformanceCategory.DB_WRITE, copyItem.getTargetItem()));
+            stmt.setObject(4, getAvgPerformanceValue(context, PerformanceCategory.DB_READ, copyItem.getSourceItem()));
+            stmt.setObject(5, getTargetNodeId());
+            stmt.setObject(6, context.getMigrationId());
+            stmt.setObject(7, copyItem.getPipelineName());
+            stmt.executeUpdate();
+            connection.commit();
+        }
+    }
+
+    protected void setTimestamp(PreparedStatement stmt, int i, Timestamp ts) throws SQLException {
+        // timestamps are always written in UTC
+        stmt.setTimestamp(i, ts, Calendar.getInstance(TimeZone.getTimeZone("UTC")));
+    }
+
+    public void markTaskCompleted(final CopyContext context, final CopyContext.DataCopyItem copyItem,
+                                  final String duration) throws Exception {
+        markTaskCompleted(context, copyItem, duration, 0);
+    }
+
+    // ORACLE_TARGET - variant that also records the duration in seconds
+    @Override
+    public void markTaskCompleted(final CopyContext context, final CopyContext.DataCopyItem copyItem,
+                                  final String duration, final float durationseconds) throws Exception {
+        Objects.requireNonNull(duration, "duration must not be null");
+        String sql = "UPDATE MIGRATIONTOOLKIT_TABLECOPYTASKS " +
+                "SET duration=?, " +
+                "lastupdate=?, " +
+                "avgwriterrowthroughput=?, " +
+                "avgreaderrowthroughput=?, " +
+                "durationinseconds=? " +
+                "WHERE targetnodeid=? " +
+                "AND migrationid=? " +
+                "AND pipelinename=? " +
+                "AND duration IS NULL";
+        try (Connection connection = getConnection(context);
+             PreparedStatement stmt = connection.prepareStatement(sql)) {
+            stmt.setObject(1, duration);
+            setTimestamp(stmt, 2, now());
+            stmt.setObject(3, getAvgPerformanceValue(context, PerformanceCategory.DB_WRITE, copyItem.getTargetItem()));
+            stmt.setObject(4, getAvgPerformanceValue(context, PerformanceCategory.DB_READ, copyItem.getSourceItem()));
+            stmt.setFloat(5, durationseconds);
+            stmt.setObject(6, getTargetNodeId());
+            stmt.setObject(7, context.getMigrationId());
+            stmt.setObject(8, copyItem.getPipelineName());
+            stmt.executeUpdate();
+            connection.commit();
+        }
+        mutePerformanceRecorder(context, copyItem);
+    }
" + + "AND failure = '0'"; + try (Connection connection = getConnection(context); + PreparedStatement stmt = connection.prepareStatement(sql)) { + String errorMsg = error.getMessage(); + if (StringUtils.isBlank(errorMsg)) { + errorMsg = error.getClass().getName(); + } + stmt.setObject(1, errorMsg.trim()); + setTimestamp(stmt, 2, now()); + stmt.setObject(3, getTargetNodeId()); + stmt.setObject(4, context.getMigrationId()); + stmt.setObject(5, copyItem.getPipelineName()); + stmt.executeUpdate(); + connection.commit(); + } + mutePerformanceRecorder(context, copyItem); + } + + @Override + public Set getUpdatedTasks(CopyContext context, OffsetDateTime since) throws Exception { + String sql = "select * from MIGRATIONTOOLKIT_TABLECOPYTASKS WHERE migrationid=? AND lastupdate >= ?"; + try (Connection connection = getConnection(context); + PreparedStatement stmt = connection.prepareStatement(sql); + ) { + stmt.setObject(1, context.getMigrationId()); + setTimestamp(stmt, 2, toTimestamp(since)); + try (ResultSet resultSet = stmt.executeQuery()) { + return convertToTask(resultSet); + } + } + } + + private Timestamp toTimestamp(OffsetDateTime ts) { + return new Timestamp(ts.toInstant().toEpochMilli()); + } + + @Override + public Set getAllTasks(CopyContext context) throws Exception { + String sql = "select * from MIGRATIONTOOLKIT_TABLECOPYTASKS WHERE migrationid=?"; + try (Connection connection = getConnection(context); + PreparedStatement stmt = connection.prepareStatement(sql); + ) { + stmt.setObject(1, context.getMigrationId()); + try (ResultSet resultSet = stmt.executeQuery()) { + return convertToTask(resultSet); + } + } + } + + private int getTargetNodeId() { + return clusterService.getClusterId(); + } + + public void setClusterService(ClusterService clusterService) { + this.clusterService = clusterService; + } + + + private Set convertToTask(ResultSet rs) throws Exception { + Set copyTasks = new HashSet<>(); + while (rs.next()) { + DatabaseCopyTask copyTask = new DatabaseCopyTask(); + copyTask.setTargetnodeId(rs.getInt("targetnodeId")); + copyTask.setMigrationId(rs.getString("migrationId")); + copyTask.setPipelinename(rs.getString("pipelinename")); + copyTask.setSourcetablename(rs.getString("sourcetablename")); + copyTask.setTargettablename(rs.getString("targettablename")); + copyTask.setColumnmap(new Gson().fromJson(rs.getString("columnmap"), new TypeToken>() { + }.getType())); + copyTask.setDuration(rs.getString("duration")); + copyTask.setCompleted(copyTask.getDuration() != null); + copyTask.setSourcerowcount(rs.getLong("sourcerowcount")); + copyTask.setTargetrowcount(rs.getLong("targetrowcount")); + copyTask.setFailure(rs.getBoolean("failure")); + copyTask.setError(rs.getString("error")); + copyTask.setLastUpdate(getDateTime(rs, "lastupdate")); + copyTask.setAvgReaderRowThroughput(rs.getDouble("avgreaderrowthroughput")); + copyTask.setAvgWriterRowThroughput(rs.getDouble("avgwriterrowthroughput")); + // ORACLE_TARGET + copyTask.setDurationinseconds(rs.getDouble("durationinseconds")); + copyTasks.add(copyTask); + } + return copyTasks; + } + + private double getAvgPerformanceValue(CopyContext context, PerformanceCategory category, String tableName) { + PerformanceRecorder recorder = context.getPerformanceProfiler().getRecorder(category, tableName); + if (recorder != null) { + PerformanceRecorder.PerformanceAggregation performanceAggregation = recorder.getRecords().get(PerformanceUnit.ROWS); + if (performanceAggregation != null) { + return performanceAggregation.getAvgThroughput().get(); + } + } + 
return 0; + } + + private void mutePerformanceRecorder(CopyContext context, CopyContext.DataCopyItem copyItem) { + context.getPerformanceProfiler().muteRecorder(PerformanceCategory.DB_READ, copyItem.getSourceItem()); + context.getPerformanceProfiler().muteRecorder(PerformanceCategory.DB_WRITE, copyItem.getTargetItem()); + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationDataTypeMapperService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationDataTypeMapperService.java new file mode 100644 index 0000000..cdee576 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationDataTypeMapperService.java @@ -0,0 +1,64 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.google.common.io.ByteStreams; +import org.apache.commons.io.IOUtils; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.Reader; +import java.io.StringWriter; +import java.sql.Blob; +import java.sql.Clob; +import java.sql.NClob; +import java.sql.SQLException; +import java.sql.Types; + +/** + * + */ +public class DefaultDatabaseMigrationDataTypeMapperService implements DatabaseMigrationDataTypeMapperService { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultDatabaseMigrationDataTypeMapperService.class); + + @Override + public Object dataTypeMapper(final Object sourceColumnValue, final int jdbcType) + throws IOException, SQLException { + Object targetColumnValue = sourceColumnValue; + if (sourceColumnValue == null) { + // do nothing + } else if (jdbcType == Types.BLOB) { + targetColumnValue = new ByteArrayInputStream(ByteStreams.toByteArray(((Blob) sourceColumnValue).getBinaryStream())); + } else if (jdbcType == Types.NCLOB) { + targetColumnValue = getValue((NClob) sourceColumnValue); + } else if (jdbcType == Types.CLOB) { + targetColumnValue = getValue((Clob) sourceColumnValue); + } + return targetColumnValue; + } + + private String getValue(final NClob nClob) throws SQLException, IOException { + return getValue(nClob.getCharacterStream()); + } + + private String getValue(final Clob clob) throws SQLException, IOException { + return getValue(clob.getCharacterStream()); + } + + private String getValue(final Reader in) throws SQLException, IOException { + final StringWriter w = new StringWriter(); + IOUtils.copy(in, w); + String value = w.toString(); + w.close(); + in.close(); + return value; + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationReportService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationReportService.java new file mode 100644 index 0000000..993cddf --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationReportService.java @@ -0,0 +1,77 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler; +import com.sap.cx.boosters.commercedbsync.utils.MaskUtil; +import de.hybris.platform.servicelayer.config.ConfigurationService; +import org.apache.commons.configuration.Configuration; +import com.sap.cx.boosters.commercedbsync.MigrationReport; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportService; + +import java.time.OffsetDateTime; +import java.util.Arrays; +import java.util.Iterator; +import java.util.Set; +import java.util.SortedMap; +import java.util.TreeMap; +import java.util.stream.Collectors; + +public class DefaultDatabaseMigrationReportService implements DatabaseMigrationReportService { + + private DatabaseCopyScheduler databaseCopyScheduler; + private DatabaseCopyTaskRepository databaseCopyTaskRepository; + private ConfigurationService configurationService; + + @Override + public MigrationReport getMigrationReport(CopyContext copyContext) throws Exception { + final MigrationReport migrationReport = new MigrationReport(); + migrationReport.setMigrationID(copyContext.getMigrationId()); + populateConfiguration(migrationReport); + migrationReport.setMigrationStatus(databaseCopyScheduler.getCurrentState(copyContext, OffsetDateTime.MAX)); + migrationReport.setDatabaseCopyTasks(databaseCopyTaskRepository.getAllTasks(copyContext)); + return migrationReport; + } + + private void populateConfiguration(MigrationReport migrationReport) { + final SortedMap configuration = new TreeMap<>(); + final Configuration config = configurationService.getConfiguration(); + final Configuration subset = config.subset(CommercedbsyncConstants.PROPERTIES_PREFIX); + final Set maskedProperties = Arrays.stream(config.getString(CommercedbsyncConstants.MIGRATION_REPORT_MASKED_PROPERTIES) + .split(",")).collect(Collectors.toSet()); + + final Iterator keys = subset.getKeys(); + + while (keys.hasNext()) { + final String key = keys.next(); + final String prefixedKey = CommercedbsyncConstants.PROPERTIES_PREFIX + "." + key; + + if (CommercedbsyncConstants.MIGRATION_REPORT_MASKED_PROPERTIES.equals(prefixedKey)) { + continue; + } + + configuration.put(prefixedKey, maskedProperties.contains(prefixedKey) ? 
CommercedbsyncConstants.MASKED_VALUE : MaskUtil.stripJdbcPassword(subset.getString(key))); + } + + migrationReport.setConfiguration(configuration); + } + + public void setDatabaseCopyScheduler(DatabaseCopyScheduler databaseCopyScheduler) { + this.databaseCopyScheduler = databaseCopyScheduler; + } + + public void setDatabaseCopyTaskRepository(DatabaseCopyTaskRepository databaseCopyTaskRepository) { + this.databaseCopyTaskRepository = databaseCopyTaskRepository; + } + + public void setConfigurationService(ConfigurationService configurationService) { + this.configurationService = configurationService; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationService.java new file mode 100644 index 0000000..ab56cdd --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationService.java @@ -0,0 +1,127 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler; +import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler; +import com.sap.cx.boosters.commercedbsync.MigrationReport; +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.context.validation.MigrationContextValidator; +import com.sap.cx.boosters.commercedbsync.provider.CopyItemProvider; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseSchemaDifferenceService; +import org.slf4j.MDC; + +import java.time.OffsetDateTime; +import java.util.Set; +import java.util.UUID; + +public class DefaultDatabaseMigrationService implements DatabaseMigrationService { + + private DatabaseCopyScheduler databaseCopyScheduler; + private CopyItemProvider copyItemProvider; + private PerformanceProfiler performanceProfiler; + private DatabaseMigrationReportService databaseMigrationReportService; + private DatabaseSchemaDifferenceService schemaDifferenceService; + private MigrationContextValidator migrationContextValidator; + + @Override + public String startMigration(final MigrationContext context) throws Exception { + migrationContextValidator.validateContext(context); + + // TODO: running migration check + performanceProfiler.reset(); + + final String migrationId = UUID.randomUUID().toString(); + + MDC.put(CommercedbsyncConstants.MDC_MIGRATIONID, migrationId); + + if (context.isSchemaMigrationEnabled() && context.isSchemaMigrationAutoTriggerEnabled()) { + schemaDifferenceService.executeSchemaDifferences(context); + } + + CopyContext copyContext = buildCopyContext(context, migrationId); + databaseCopyScheduler.schedule(copyContext); + + return migrationId; + } + + @Override + public void stopMigration(MigrationContext context, String migrationID) throws Exception { + CopyContext copyContext = buildIdContext(context, migrationID); + databaseCopyScheduler.abort(copyContext); + } + + private CopyContext 
buildCopyContext(MigrationContext context, String migrationID) throws Exception { + Set dataCopyItems = copyItemProvider.get(context); + return new CopyContext(migrationID, context, dataCopyItems, performanceProfiler); + } + + private CopyContext buildIdContext(MigrationContext context, String migrationID) throws Exception { + //we use a lean implementation of the copy context to avoid calling the provider which is not required for task management. + return new CopyContext.IdCopyContext(migrationID, context, performanceProfiler); + } + + @Override + public MigrationStatus getMigrationState(MigrationContext context, String migrationID) throws Exception { + return getMigrationState(context, migrationID, OffsetDateTime.MAX); + } + + @Override + public MigrationStatus getMigrationState(MigrationContext context, String migrationID, OffsetDateTime since) throws Exception { + CopyContext copyContext = buildIdContext(context, migrationID); + return databaseCopyScheduler.getCurrentState(copyContext, since); + } + + @Override + public MigrationReport getMigrationReport(MigrationContext context, String migrationID) throws Exception { + CopyContext copyContext = buildIdContext(context, migrationID); + return databaseMigrationReportService.getMigrationReport(copyContext); + } + + @Override + public MigrationStatus waitForFinish(MigrationContext context, String migrationID) throws Exception { + MigrationStatus status; + do { + status = getMigrationState(context, migrationID); + Thread.sleep(5000); + } while (!status.isCompleted()); + + if (status.isFailed()) { + throw new Exception("Database migration failed"); + } + + return status; + } + + public void setDatabaseCopyScheduler(DatabaseCopyScheduler databaseCopyScheduler) { + this.databaseCopyScheduler = databaseCopyScheduler; + } + + public void setCopyItemProvider(CopyItemProvider copyItemProvider) { + this.copyItemProvider = copyItemProvider; + } + + public void setPerformanceProfiler(PerformanceProfiler performanceProfiler) { + this.performanceProfiler = performanceProfiler; + } + + public void setDatabaseMigrationReportService(DatabaseMigrationReportService databaseMigrationReportService) { + this.databaseMigrationReportService = databaseMigrationReportService; + } + + public void setSchemaDifferenceService(DatabaseSchemaDifferenceService schemaDifferenceService) { + this.schemaDifferenceService = schemaDifferenceService; + } + + public void setMigrationContextValidator(MigrationContextValidator migrationContextValidator) { + this.migrationContextValidator = migrationContextValidator; + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationSynonymService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationSynonymService.java new file mode 100644 index 0000000..990dc1a --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseMigrationSynonymService.java @@ -0,0 +1,34 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationSynonymService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class DefaultDatabaseMigrationSynonymService implements DatabaseMigrationSynonymService { + + private static final Logger LOG = LoggerFactory.getLogger(DefaultDatabaseMigrationSynonymService.class); + + private static final String YDEPLOYMENTS = CommercedbsyncConstants.DEPLOYMENTS_TABLE; + private static final String ATTRDESCRIPTORS = "attributedescriptors"; + + + @Override + public void recreateSynonyms(DataRepository repository, String prefix) throws Exception { + recreateSynonym(repository, YDEPLOYMENTS, prefix); + recreateSynonym(repository, ATTRDESCRIPTORS, prefix); + } + + private void recreateSynonym(DataRepository repository, String table, String actualPrefix) throws Exception { + LOG.info("Creating Synonym for {} on {}{}", table, actualPrefix, table); + repository.executeUpdateAndCommit(String.format("DROP SYNONYM IF EXISTS %s", table)); + repository.executeUpdateAndCommit(String.format("CREATE SYNONYM %s FOR %s%s", table, actualPrefix, table)); + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseSchemaDifferenceService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseSchemaDifferenceService.java new file mode 100644 index 0000000..b64c800 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/DefaultDatabaseSchemaDifferenceService.java @@ -0,0 +1,567 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.google.common.base.Preconditions; +import com.google.common.collect.ArrayListMultimap; +import com.google.common.collect.ListMultimap; +import com.google.gson.Gson; +import com.google.gson.GsonBuilder; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import de.hybris.platform.servicelayer.config.ConfigurationService; +import org.apache.commons.lang.StringUtils; +import org.apache.commons.lang3.ObjectUtils; +import org.apache.ddlutils.Platform; +import org.apache.ddlutils.model.Column; +import org.apache.ddlutils.model.Database; +import org.apache.ddlutils.model.Table; +import com.sap.cx.boosters.commercedbsync.TableCandidate; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.filter.DataCopyTableFilter; +import com.sap.cx.boosters.commercedbsync.provider.CopyItemProvider; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationReportStorageService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseSchemaDifferenceService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.ByteArrayInputStream; +import java.nio.charset.StandardCharsets; +import java.sql.Connection; +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Objects; +import java.util.Set; +import java.util.stream.Collectors; +import java.util.stream.Stream; + +public class DefaultDatabaseSchemaDifferenceService implements DatabaseSchemaDifferenceService { + private static final Logger LOG = LoggerFactory.getLogger(DefaultDatabaseSchemaDifferenceService.class); + + private DataCopyTableFilter dataCopyTableFilter; + private DatabaseMigrationReportStorageService databaseMigrationReportStorageService; + private CopyItemProvider copyItemProvider; + private ConfigurationService configurationService; + + @Override + public String generateSchemaDifferencesSql(MigrationContext context) throws Exception { + final int maxStageMigrations = context.getMaxTargetStagedMigrations(); + final Set stagingPrefixes = findStagingPrefixes(context); + String schemaSql = ""; + if (stagingPrefixes.size() > maxStageMigrations) { + final Database databaseModelWithChanges = getDatabaseModelWithChanges4TableDrop(context); + LOG.info("generateSchemaDifferencesSql..getDatabaseModelWithChanges4TableDrop.. - calibrating changes "); + schemaSql = context.getDataTargetRepository().asPlatform().getDropTablesSql(databaseModelWithChanges, true); + LOG.info("generateSchemaDifferencesSql - generated DDL SQLs for DROP. 
"); + } else { + LOG.info( + "generateSchemaDifferencesSql..getDatabaseModelWithChanges4TableCreation - calibrating Schema changes "); + final DatabaseStatus databaseModelWithChanges = getDatabaseModelWithChanges4TableCreation(context); + if (databaseModelWithChanges.isHasSchemaDiff()) { + LOG.info("generateSchemaDifferencesSql..Schema Diff found - now to generate the SQLs "); + if (context.getDataTargetRepository().getDatabaseProvider().isHanaUsed()){ + schemaSql = context.getDataTargetRepository().asPlatform() + .getAlterTablesSql(null ,context.getDataTargetRepository().getDataSourceConfiguration().getSchema(),null,databaseModelWithChanges.getDatabase()); + } else { + schemaSql = context.getDataTargetRepository().asPlatform() + .getAlterTablesSql(databaseModelWithChanges.getDatabase()); + } + + schemaSql = postProcess(schemaSql, context); + LOG.info("generateSchemaDifferencesSql - generated DDL ALTER SQLs. "); + } + + } + + return schemaSql; + } + + /* + * ORACLE_TARGET - START This a TEMP fix, it is difficlt to get from from + * Sql Server NVARCHAR(255), NVARCHAR(MAX) to convert properly into to + * Orcale's VARCHAR2(255) and CLOB respectively. Therefore when the schema + * script output has VARCHAR2(2147483647) which is from SqlServer's + * NVARCHAR(max), then we just make it CLOB. Alternatively check if + * something can be done via the mappings in OracleDataRepository. + */ + private String postProcess(String schemaSql, final MigrationContext context) { + if (context.getDataTargetRepository().getDatabaseProvider().isOracleUsed()) { + schemaSql = schemaSql.replaceAll(CommercedbsyncConstants.MIGRATION_ORACLE_MAX, + CommercedbsyncConstants.MIGRATION_ORACLE_VARCHAR24k); + // another odd character that comes un in the SQL + LOG.info("Changing the NVARCHAR2 " + schemaSql); + schemaSql = schemaSql.replaceAll("NUMBER\\(10,0\\) DEFAULT \'\'\'\'\'\'", "NUMBER(10,0) DEFAULT 0"); + } + return schemaSql; + } + // ORACLE_TARGET - END + + @Override + public void executeSchemaDifferencesSql(final MigrationContext context, final String sql) throws Exception { + + if (!context.isSchemaMigrationEnabled()) { + throw new RuntimeException("Schema migration is disabled. 
Check property:" + + CommercedbsyncConstants.MIGRATION_SCHEMA_ENABLED); + } + + final Platform platform = context.getDataTargetRepository().asPlatform(); + final boolean continueOnError = false; + final Connection connection = platform.borrowConnection(); + try { + platform.evaluateBatch(connection, sql, continueOnError); + LOG.info("Executed the following sql to change the schema:\n" + sql); + writeReport(context, sql); + } catch (final Exception e) { + throw new RuntimeException("Could not execute Schema Diff Script", e); + } finally { + platform.returnConnection(connection); + } + } + + @Override + public void executeSchemaDifferences(final MigrationContext context) throws Exception { + executeSchemaDifferencesSql(context, generateSchemaDifferencesSql(context)); + } + + private Set findDuplicateTables(final MigrationContext migrationContext) { + try { + final Set stagingPrefixes = findStagingPrefixes(migrationContext); + final Set targetSet = migrationContext.getDataTargetRepository().getAllTableNames(); + return targetSet.stream() + .filter(t -> stagingPrefixes.stream().anyMatch(p -> StringUtils.startsWithIgnoreCase(t, p))) + .collect(Collectors.toSet()); + } catch (final Exception e) { + LOG.error("Error occurred while trying to find duplicate tables", e); + } + return Collections.EMPTY_SET; + } + + private Set findStagingPrefixes(final MigrationContext context) throws Exception { + final String currentSystemPrefix = configurationService.getConfiguration().getString("db.tableprefix"); + final String currentMigrationPrefix = context.getDataTargetRepository().getDataSourceConfiguration() + .getTablePrefix(); + final Set targetSet = context.getDataTargetRepository().getAllTableNames(); + final String deploymentsTable = CommercedbsyncConstants.DEPLOYMENTS_TABLE; + final Set detectedPrefixes = targetSet.stream().filter(t -> t.toLowerCase().endsWith(deploymentsTable)) + .filter(t -> !StringUtils.equalsIgnoreCase(t, currentSystemPrefix + deploymentsTable)) + .filter(t -> !StringUtils.equalsIgnoreCase(t, currentMigrationPrefix + deploymentsTable)) + .map(t -> StringUtils.removeEndIgnoreCase(t, deploymentsTable)).collect(Collectors.toSet()); + return detectedPrefixes; + + } + + private Database getDatabaseModelWithChanges4TableDrop(final MigrationContext context) { + final Set duplicateTables = findDuplicateTables(context); + final Database database = context.getDataTargetRepository().asDatabase(true); + // clear tables and add only the ones to be removed + final Table[] tables = database.getTables(); + Stream.of(tables).forEach(t -> { + database.removeTable(t); + }); + duplicateTables.forEach(t -> { + final Table table = ObjectUtils.defaultIfNull(database.findTable(t), new Table()); + table.setName(t); + database.addTable(table); + }); + return database; + } + + protected DatabaseStatus getDatabaseModelWithChanges4TableCreation(final MigrationContext migrationContext) + throws Exception { + final DatabaseStatus dbStatus = new DatabaseStatus(); + + final SchemaDifferenceResult differenceResult = getDifference(migrationContext); + if (!differenceResult.hasDifferences()) { + LOG.info("getDatabaseModelWithChanges4TableCreation - No Difference found in schema "); + dbStatus.setDatabase(migrationContext.getDataTargetRepository().asDatabase()); + dbStatus.setHasSchemaDiff(false); + return dbStatus; + } + final SchemaDifference targetDiff = differenceResult.getTargetSchema(); + final Database database = targetDiff.getDatabase(); + + // add missing tables in target + if 
(migrationContext.isAddMissingTablesToSchemaEnabled()) { + final List missingTables = targetDiff.getMissingTables(); + for (final TableKeyPair missingTable : missingTables) { + final Table tableClone = (Table) differenceResult.getSourceSchema().getDatabase() + .findTable(missingTable.getLeftName(), false).clone(); + tableClone.setName(missingTable.getRightName()); + tableClone.setCatalog( + migrationContext.getDataTargetRepository().getDataSourceConfiguration().getCatalog()); + tableClone + .setSchema(migrationContext.getDataTargetRepository().getDataSourceConfiguration().getSchema()); + database.addTable(tableClone); + LOG.info("getDatabaseModelWithChanges4TableCreation - missingTable.getRightName() =" + + missingTable.getRightName() + ", missingTable.getLeftName() = " + missingTable.getLeftName()); + } + } + + // add missing columns in target + if (migrationContext.isAddMissingColumnsToSchemaEnabled()) { + final ListMultimap missingColumnsInTable = targetDiff.getMissingColumnsInTable(); + for (final TableKeyPair missingColumnsTable : missingColumnsInTable.keySet()) { + final List columns = missingColumnsInTable.get(missingColumnsTable); + for (final String missingColumn : columns) { + final Table missingColumnsTableModel = differenceResult.getSourceSchema().getDatabase() + .findTable(missingColumnsTable.getLeftName(), false); + final Column columnClone = (Column) missingColumnsTableModel.findColumn(missingColumn, false) + .clone(); + LOG.info(" Column " + columnClone.getName() + ", Type = " + columnClone.getType() + ", Type Code " + + columnClone.getTypeCode() + ",size " + columnClone.getSize() + ", size as int " + + columnClone.getSizeAsInt()); + // columnClone.set + final Table table = database.findTable(missingColumnsTable.getRightName(), false); + Preconditions.checkState(table != null, "Data inconsistency: Table must exist."); + table.addColumn(columnClone); + } + } + } + + //remove superfluous tables in target + if (migrationContext.isRemoveMissingTablesToSchemaEnabled()) { + throw new UnsupportedOperationException("not yet implemented"); + } + + // remove superfluous columns in target + if (migrationContext.isRemoveMissingColumnsToSchemaEnabled()) { + final ListMultimap superfluousColumnsInTable = differenceResult.getSourceSchema() + .getMissingColumnsInTable(); + for (final TableKeyPair superfluousColumnsTable : superfluousColumnsInTable.keySet()) { + final List columns = superfluousColumnsInTable.get(superfluousColumnsTable); + for (final String superfluousColumn : columns) { + final Table table = database.findTable(superfluousColumnsTable.getLeftName(), false); + Preconditions.checkState(table != null, "Data inconsistency: Table must exist."); + final Column columnToBeRemoved = table.findColumn(superfluousColumn, false); + // remove indices in case column is part of one + Stream.of(table.getIndices()).filter(i -> i.hasColumn(columnToBeRemoved)) + .forEach(i -> table.removeIndex(i)); + table.removeColumn(columnToBeRemoved); + } + } + } + dbStatus.setDatabase(database); + dbStatus.setHasSchemaDiff(true); + LOG.info("getDatabaseModelWithChanges4TableCreation Schema Diff found - done "); + return dbStatus; + } + + protected void writeReport(final MigrationContext migrationContext, final String differenceSql) { + try { + final String fileName = String.format("schemaChanges-%s.sql", LocalDateTime.now().getNano()); + databaseMigrationReportStorageService.store(fileName, + new ByteArrayInputStream(differenceSql.getBytes(StandardCharsets.UTF_8))); + } catch (final Exception e) { + 
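+ // Storing the report is best-effort: the DDL has already been executed at this point, so a storage failure is only logged and does not abort the migration.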
LOG.error("Error executing writing diff report", e); + } + } + + @Override + public SchemaDifferenceResult getDifference(final MigrationContext migrationContext) throws Exception { + try { + LOG.info("reading source database model ..."); + migrationContext.getDataSourceRepository().asDatabase(true); + LOG.info("reading target database model ..."); + migrationContext.getDataTargetRepository().asDatabase(true); + + LOG.info("computing SCHEMA diff, REF DB = " + + migrationContext.getDataTargetRepository().getDatabaseProvider().getDbName() + + "vs Checking in DB = " + + migrationContext.getDataSourceRepository().getDatabaseProvider().getDbName()); + final Set targetTableCandidates = copyItemProvider + .getTargetTableCandidates(migrationContext); + final SchemaDifference sourceSchemaDifference = computeDiff(migrationContext, + migrationContext.getDataTargetRepository(), migrationContext.getDataSourceRepository(), + targetTableCandidates); + LOG.info("compute SCHMEA diff, REF DB =" + + migrationContext.getDataSourceRepository().getDatabaseProvider().getDbName() + + "vs Checking in DB = " + + migrationContext.getDataTargetRepository().getDatabaseProvider().getDbName()); + final Set sourceTableCandidates = copyItemProvider + .getSourceTableCandidates(migrationContext); + final SchemaDifference targetSchemaDifference = computeDiff(migrationContext, + migrationContext.getDataSourceRepository(), migrationContext.getDataTargetRepository(), + sourceTableCandidates); + final SchemaDifferenceResult schemaDifferenceResult = new SchemaDifferenceResult(sourceSchemaDifference, + targetSchemaDifference); + LOG.info("Diff finished. Differences detected: " + schemaDifferenceResult.hasDifferences()); + + return schemaDifferenceResult; + } catch (final Exception e) { + throw new RuntimeException("Error computing schema diff", e); + } + } + + protected String getSchemaDifferencesAsJson(final SchemaDifferenceResult schemaDifferenceResult) { + final Gson gson = new GsonBuilder().setPrettyPrinting().create(); + return gson.toJson(schemaDifferenceResult); + } + + private void logMigrationContext(final MigrationContext context) { + if (context == null) { + return; + } + LOG.info("--------MIGRATION CONTEXT- START----------"); + LOG.info("isAddMissingColumnsToSchemaEnabled=" + context.isAddMissingColumnsToSchemaEnabled()); + LOG.info("isAddMissingTablesToSchemaEnabled=" + context.isAddMissingTablesToSchemaEnabled()); + LOG.info("isAuditTableMigrationEnabled=" + context.isAuditTableMigrationEnabled()); + LOG.info("isBulkCopyEnabled=" + context.isBulkCopyEnabled()); + LOG.info("isClusterMode=" + context.isClusterMode()); + LOG.info("isDeletionEnabled=" + context.isDeletionEnabled()); + LOG.info("isDisableAllIndexesEnabled=" + context.isDisableAllIndexesEnabled()); + LOG.info("isDropAllIndexesEnabled=" + context.isDropAllIndexesEnabled()); + LOG.info("isFailOnErrorEnabled=" + context.isFailOnErrorEnabled()); + LOG.info("isIncrementalModeEnabled=" + context.isIncrementalModeEnabled()); + LOG.info("isMigrationTriggeredByUpdateProcess=" + context.isMigrationTriggeredByUpdateProcess()); + LOG.info("isRemoveMissingColumnsToSchemaEnabled=" + context.isRemoveMissingColumnsToSchemaEnabled()); + LOG.info("isRemoveMissingTablesToSchemaEnabled=" + context.isRemoveMissingTablesToSchemaEnabled()); + LOG.info("isSchemaMigrationAutoTriggerEnabled=" + context.isSchemaMigrationAutoTriggerEnabled()); + LOG.info("isSchemaMigrationEnabled=" + context.isSchemaMigrationEnabled()); + LOG.info("isTruncateEnabled=" + context.isTruncateEnabled()); + 
LOG.info("getIncludedTables=" + context.getIncludedTables()); + LOG.info("getExcludedTables=" + context.getExcludedTables()); + LOG.info("getIncrementalTables=" + context.getIncrementalTables()); + LOG.info("getTruncateExcludedTables=" + context.getTruncateExcludedTables()); + LOG.info("getCustomTables=" + context.getCustomTables()); + LOG.info("getIncrementalTimestamp=" + context.getIncrementalTimestamp()); + LOG.info( + "Source TS Name=" + context.getDataSourceRepository().getDataSourceConfiguration().getTypeSystemName()); + LOG.info("Source TS Suffix =" + + context.getDataSourceRepository().getDataSourceConfiguration().getTypeSystemSuffix()); + LOG.info( + "Target TS Name=" + context.getDataTargetRepository().getDataSourceConfiguration().getTypeSystemName()); + LOG.info("Target TS Suffix =" + + context.getDataTargetRepository().getDataSourceConfiguration().getTypeSystemSuffix()); + + LOG.info("--------MIGRATION CONTEXT- END----------"); + } + + protected SchemaDifference computeDiff(final MigrationContext context, final DataRepository leftRepository, + final DataRepository rightRepository, final Set leftCandidates) { + logMigrationContext(context); + final SchemaDifference schemaDifference = new SchemaDifference(rightRepository.asDatabase(), + rightRepository.getDataSourceConfiguration().getTablePrefix()); + final Set leftDatabaseTables = getTables(context, leftRepository, leftCandidates); + LOG.info("LEFT Repo = " + leftRepository.getDatabaseProvider().getDbName()); + LOG.info("RIGHT Repo = " + rightRepository.getDatabaseProvider().getDbName()); + + try { + LOG.debug(" All tables in LEFT Repo " + leftRepository.getAllTableNames()); + LOG.debug(" All tables in RIGHT Repo " + rightRepository.getAllTableNames()); + } catch (final Exception e) { + LOG.error("Cannot fetch all Table Names" + e); + } + + // LOG.info(" -------------------------------"); + for (final TableCandidate leftCandidate : leftDatabaseTables) { + LOG.info(" Checking if Left Table exists --> " + leftCandidate.getFullTableName()); + final Table leftTable = leftRepository.asDatabase().findTable(leftCandidate.getFullTableName(), false); + if (leftTable == null) { + LOG.error(String.format("Table %s in DB %s cannot be found, but should exist", + leftCandidate.getFullTableName(), + leftRepository.getDataSourceConfiguration().getConnectionString())); + continue; + + // throw new RuntimeException(String.format("Table %s in DB %s + // cannot be found, but should exists", + // leftCandidate.getFullTableName(), + // leftRepository.getDataSourceConfiguration().getConnectionString())); + } + final String rightTableName = translateTableName(leftRepository, rightRepository, leftCandidate); + final Table rightTable = rightRepository.asDatabase().findTable(rightTableName, false); + if (rightTable == null) { + schemaDifference.getMissingTables().add(new TableKeyPair(leftTable.getName(), rightTableName)); + LOG.info("MISSING Table !! 
--> " + leftTable.getName() + " searched for " + rightTableName); + } else { + // LOG.info(" FOUND Table --> " + rightTable.getName()); + final Column[] leftTableColumns = leftTable.getColumns(); + for (final Column leftTableColumn : leftTableColumns) { + if (rightTable.findColumn(leftTableColumn.getName(), false) == null) { + LOG.info("Missing column --> " + leftTableColumn.getName() + " -->" + leftTable.getName()); + schemaDifference.getMissingColumnsInTable().put( + new TableKeyPair(leftTable.getName(), rightTable.getName()), leftTableColumn.getName()); + } + } + } + } + return schemaDifference; + } + + private String translateTableName(final DataRepository leftRepository, final DataRepository rightRepository, + final TableCandidate leftCandidate) { + String translatedTableName = rightRepository.getDataSourceConfiguration().getTablePrefix() + + leftCandidate.getBaseTableName(); + if (leftCandidate.isTypeSystemRelatedTable()) { + translatedTableName += rightRepository.getDataSourceConfiguration().getTypeSystemSuffix(); + } + // ORCALE_TEMP - START + /* + * if (!leftCandidate.getAdditionalSuffix().isEmpty() && + * translatedTableName.toLowerCase().endsWith(leftCandidate. + * getAdditionalSuffix())) { + * //System.out.println("$$Translated name ends with LP " + + * translatedTableName); return translatedTableName; } + */ + // ORCALE_TEMP - END + return translatedTableName + leftCandidate.getAdditionalSuffix(); + } + + private Set getTables(final MigrationContext context, final DataRepository repository, + final Set candidates) { + return candidates.stream().filter(c -> dataCopyTableFilter.filter(context).test(c.getCommonTableName())) + .collect(Collectors.toSet()); + } + + public void setDataCopyTableFilter(final DataCopyTableFilter dataCopyTableFilter) { + this.dataCopyTableFilter = dataCopyTableFilter; + } + + public void setDatabaseMigrationReportStorageService( + final DatabaseMigrationReportStorageService databaseMigrationReportStorageService) { + this.databaseMigrationReportStorageService = databaseMigrationReportStorageService; + } + + public void setConfigurationService(final ConfigurationService configurationService) { + this.configurationService = configurationService; + } + + public void setCopyItemProvider(final CopyItemProvider copyItemProvider) { + this.copyItemProvider = copyItemProvider; + } + + public static class SchemaDifferenceResult { + private final SchemaDifference sourceSchema; + private final SchemaDifference targetSchema; + + public SchemaDifferenceResult(final SchemaDifference sourceSchema, final SchemaDifference targetSchema) { + this.sourceSchema = sourceSchema; + this.targetSchema = targetSchema; + } + + public SchemaDifference getSourceSchema() { + return sourceSchema; + } + + public SchemaDifference getTargetSchema() { + return targetSchema; + } + + public boolean hasDifferences() { + final boolean hasMissingTargetTables = getTargetSchema().getMissingTables().size() > 0; + final boolean hasMissingColumnsInTargetTable = getTargetSchema().getMissingColumnsInTable().size() > 0; + final boolean hasMissingSourceTables = getSourceSchema().getMissingTables().size() > 0; + final boolean hasMissingColumnsInSourceTable = getSourceSchema().getMissingColumnsInTable().size() > 0; + return hasMissingTargetTables || hasMissingColumnsInTargetTable || hasMissingSourceTables + || hasMissingColumnsInSourceTable; + } + } + + class DatabaseStatus { + private Database database; + + /** + * @return the database + */ + public Database getDatabase() { + return database; + } + + /** 
+ * @param database + * the database to set + */ + public void setDatabase(final Database database) { + this.database = database; + } + + /** + * @return the hasSchemaDiff + */ + public boolean isHasSchemaDiff() { + return hasSchemaDiff; + } + + /** + * @param hasSchemaDiff + * the hasSchemaDiff to set + */ + public void setHasSchemaDiff(final boolean hasSchemaDiff) { + this.hasSchemaDiff = hasSchemaDiff; + } + + private boolean hasSchemaDiff; + } + + public static class SchemaDifference { + + private final Database database; + private final String prefix; + + private final List missingTables = new ArrayList<>(); + private final ListMultimap missingColumnsInTable = ArrayListMultimap.create(); + + public SchemaDifference(final Database database, final String prefix) { + this.database = database; + this.prefix = prefix; + + } + + public Database getDatabase() { + return database; + } + + public String getPrefix() { + return prefix; + } + + public List getMissingTables() { + return missingTables; + } + + public ListMultimap getMissingColumnsInTable() { + return missingColumnsInTable; + } + } + + public static class TableKeyPair { + private final String leftName; + private final String rightName; + + public TableKeyPair(final String leftName, final String rightName) { + this.leftName = leftName; + this.rightName = rightName; + } + + public String getLeftName() { + return leftName; + } + + public String getRightName() { + return rightName; + } + + @Override + public boolean equals(final Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + final TableKeyPair that = (TableKeyPair) o; + return leftName.equals(that.leftName) && rightName.equals(that.rightName); + } + + @Override + public int hashCode() { + return Objects.hash(leftName, rightName); + } + } + +} \ No newline at end of file diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/PipeDatabaseMigrationCopyService.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/PipeDatabaseMigrationCopyService.java new file mode 100644 index 0000000..88b3e60 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/service/impl/PipeDatabaseMigrationCopyService.java @@ -0,0 +1,181 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.service.impl; + +import com.google.common.base.Stopwatch; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.scheduler.DatabaseCopyScheduler; +import com.sap.cx.boosters.commercedbsync.strategy.PipeWriterStrategy; +import org.apache.commons.lang3.tuple.Pair; +import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe; +import com.sap.cx.boosters.commercedbsync.concurrent.DataPipeFactory; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.dataset.DataSet; +import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationCopyService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.slf4j.MDC; +import org.springframework.core.task.AsyncTaskExecutor; +import org.springframework.core.task.TaskRejectedException; +import org.springframework.util.backoff.BackOffExecution; +import org.springframework.util.backoff.ExponentialBackOff; + +import java.util.ArrayList; +import java.util.Deque; +import java.util.LinkedList; +import java.util.List; +import java.util.Objects; +import java.util.Set; +import java.util.concurrent.Callable; +import java.util.concurrent.Future; +import java.util.stream.Collectors; + +/** + * Service to start the asynchronous migration + */ +public class PipeDatabaseMigrationCopyService implements DatabaseMigrationCopyService { + private static final Logger LOG = LoggerFactory.getLogger(PipeDatabaseMigrationCopyService.class); + + private final DataPipeFactory pipeFactory; + private final PipeWriterStrategy writerStrategy; + private final AsyncTaskExecutor executor; + private final DatabaseCopyTaskRepository databaseCopyTaskRepository; + private final DatabaseCopyScheduler scheduler; + + + public PipeDatabaseMigrationCopyService(DataPipeFactory pipeFactory, PipeWriterStrategy writerStrategy, AsyncTaskExecutor executor, DatabaseCopyTaskRepository databaseCopyTaskRepository, DatabaseCopyScheduler scheduler) { + this.pipeFactory = pipeFactory; + this.writerStrategy = writerStrategy; + this.executor = executor; + this.databaseCopyTaskRepository = databaseCopyTaskRepository; + this.scheduler = scheduler; + } + + @Override + public void copyAllAsync(CopyContext context) { + Set copyItems = context.getCopyItems(); + Deque>> tasksToSchedule = generateCopyTasks(context, copyItems); + scheduleTasks(context, tasksToSchedule); + } + + /** + * Creates Tasks to copy the Data + * + * @param context + * @param copyItems + * @return + */ + private Deque>> generateCopyTasks(CopyContext context, Set copyItems) { + return copyItems.stream() + .map(item -> Pair.of(item, (Callable) () -> { + final Stopwatch timer = Stopwatch.createStarted(); + try (MDC.MDCCloseable ignored = MDC.putCloseable(CommercedbsyncConstants.MDC_PIPELINE, item.getPipelineName())) { + try { + copy(context, item); + } catch (Exception e) { + LOG.error("Failed to copy item", e); + return Boolean.FALSE; + } finally { + // ORACLE_TARGET ADDED duration in seconds + final Stopwatch endStop = timer.stop(); + silentlyUpdateCompletedState(context, item, endStop.toString(), endStop.elapsed().getSeconds()); + } + } + return Boolean.TRUE; + })).collect(Collectors.toCollection(LinkedList::new)); + } + + /** + * Performs the actual copy of an item + * + * @param copyContext + * @param item + * @throws Exception + */ + private void 
copy(CopyContext copyContext, CopyContext.DataCopyItem item) throws Exception { + DataPipe dataPipe = null; + try { + dataPipe = pipeFactory.create(copyContext, item); + writerStrategy.write(copyContext, dataPipe, item); + } catch (Exception e) { + if (dataPipe != null) { + dataPipe.requestAbort(e); + } + throw e; + } + } + + /** + * Adds the tasks to the executor + * + * @param context + * @param tasksToSchedule + */ + private void scheduleTasks(CopyContext context, Deque>> tasksToSchedule) { + List>> runningTasks = new ArrayList<>(); + BackOffExecution backoff = null; + CopyContext.DataCopyItem previousReject = null; + try { + while (tasksToSchedule.peekFirst() != null) { + Pair> task = tasksToSchedule.removeFirst(); + try { + runningTasks.add(Pair.of(task.getLeft(), executor.submit(task.getRight()))); + } catch (TaskRejectedException e) { + // this shouldn't really happen, the writer thread pool has an unbounded queue + // but better be safe than sorry... + tasksToSchedule.addFirst(task); + if (!Objects.equals(task.getLeft(), previousReject)) { + backoff = new ExponentialBackOff().start(); + } + previousReject = task.getLeft(); + long waitInterval = backoff.nextBackOff(); + LOG.debug("Task rejected. Retrying in {}ms...", waitInterval); + Thread.sleep(waitInterval); + } + } + } catch (Exception e) { + try { + scheduler.abort(context); + } catch (Exception exception) { + LOG.error("Could not abort migration", e); + } + for (Pair> running : runningTasks) { + if (running.getRight().cancel(true)) { + markAsCancelled(context, running.getLeft()); + } + } + for (Pair> copyTask : tasksToSchedule) { + markAsCancelled(context, copyTask.getLeft()); + } + if (e instanceof InterruptedException) { + Thread.currentThread().interrupt(); + } + } + + LOG.debug("Running Tasks" + runningTasks.size()); + } + + private void markAsCancelled(CopyContext context, CopyContext.DataCopyItem item) { + try { + databaseCopyTaskRepository.markTaskFailed(context, item, new RuntimeException("Execution cancelled")); + } catch (Exception e) { + LOG.error("Failed to set cancelled status", e); + } + } + + // ORACLE_TARGET - added durationInseconds + private void silentlyUpdateCompletedState(final CopyContext context, final CopyContext.DataCopyItem item, + final String duration, final float durationSeconds) { + try { + // ORACLE_TARGET - added durationInseconds + databaseCopyTaskRepository.markTaskCompleted(context, item, duration, durationSeconds); + } catch (final Exception e) { + LOG.error("Failed to update copy status", e); + } + } +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/InitUpdateProcessTrigger.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/InitUpdateProcessTrigger.java new file mode 100644 index 0000000..87d72b9 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/InitUpdateProcessTrigger.java @@ -0,0 +1,55 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.setup; + +import de.hybris.platform.media.services.MediaStorageInitializer; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class InitUpdateProcessTrigger implements MediaStorageInitializer { + + private static final Logger LOG = LoggerFactory.getLogger(InitUpdateProcessTrigger.class); + + private MigrationContext migrationContext; + private DatabaseMigrationService databaseMigrationService; + private boolean failOnError = false; + + public InitUpdateProcessTrigger(MigrationContext migrationContext, DatabaseMigrationService databaseMigrationService) { + this.migrationContext = migrationContext; + this.databaseMigrationService = databaseMigrationService; + } + + @Override + public void onInitialize() { + //Do nothing + } + + @Override + public void onUpdate() { + try { + if (migrationContext.isMigrationTriggeredByUpdateProcess()) { + LOG.info("Starting data migration ..."); + String migrationId = databaseMigrationService.startMigration(migrationContext); + databaseMigrationService.waitForFinish(migrationContext, migrationId); + //note: further update activities not stopped here -> should we? + } + } catch (Exception e) { + failOnError = migrationContext.isFailOnErrorEnabled(); + if (failOnError) { + throw new Error(e); + } + } + } + + @Override + public boolean failOnInitUpdateError() { + return failOnError; + } + +} \ No newline at end of file diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/MigrationSystemSetup.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/MigrationSystemSetup.java new file mode 100644 index 0000000..869c1c7 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/setup/MigrationSystemSetup.java @@ -0,0 +1,53 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.setup; + +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import de.hybris.platform.core.initialization.SystemSetup; +import de.hybris.platform.core.initialization.SystemSetupContext; +import de.hybris.platform.servicelayer.config.ConfigurationService; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationSynonymService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * This class provides hooks into the system's initialization and update processes. 
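+ * For example (illustrative prefix only): with db.tableprefix=stage1_, essential-data creation recreates synonyms such as ydeployments -> stage1_ydeployments.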
+ */ +@SystemSetup(extension = CommercedbsyncConstants.EXTENSIONNAME) +public class MigrationSystemSetup { + + private static final Logger LOG = LoggerFactory.getLogger(MigrationSystemSetup.class); + + private MigrationContext migrationContext; + private ConfigurationService configurationService; + private DatabaseMigrationSynonymService databaseMigrationSynonymService; + + public MigrationSystemSetup(MigrationContext migrationContext, ConfigurationService configurationService, DatabaseMigrationSynonymService databaseMigrationSynonymService) { + this.migrationContext = migrationContext; + this.configurationService = configurationService; + this.databaseMigrationSynonymService = databaseMigrationSynonymService; + } + + /** + * CCv2 workaround: the CCv2 builder does not support prefixes yet. + * Creates a synonym ydeployments -> prefix_ydeployments and + * a synonym attributedescriptors -> prefix_attributedescriptors. + * + * @param context + * @throws Exception + */ + @SystemSetup(type = SystemSetup.Type.ESSENTIAL, process = SystemSetup.Process.ALL) + public void createEssentialData(final SystemSetupContext context) throws Exception { + String actualPrefix = configurationService.getConfiguration().getString("db.tableprefix"); + if (StringUtils.isNotEmpty(actualPrefix)) { + databaseMigrationSynonymService.recreateSynonyms(migrationContext.getDataTargetRepository(), actualPrefix); + } + } + +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/PipeWriterStrategy.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/PipeWriterStrategy.java new file mode 100644 index 0000000..8b96053 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/PipeWriterStrategy.java @@ -0,0 +1,30 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors. + * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.strategy; + +import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; + +import javax.annotation.concurrent.ThreadSafe; + +/** + * Main strategy to write data to a target database + * + * @param <T> the type of data chunk transported through the pipe + */ +@ThreadSafe +public interface PipeWriterStrategy<T> { + /** + * Performs the actual copying of Data Items + * + * @param context + * @param pipe + * @param item + * @throws Exception + */ + void write(CopyContext context, DataPipe<T> pipe, CopyContext.DataCopyItem item) throws Exception; +} diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/impl/CopyPipeWriterStrategy.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/impl/CopyPipeWriterStrategy.java new file mode 100644 index 0000000..5691b33 --- /dev/null +++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/strategy/impl/CopyPipeWriterStrategy.java @@ -0,0 +1,1097 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors. 
+ * License: Apache-2.0 + * + */ + +package com.sap.cx.boosters.commercedbsync.strategy.impl; + +import com.google.common.base.Joiner; +import com.google.common.base.Splitter; +import com.google.common.base.Stopwatch; +import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy; +import com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions; +import com.microsoft.sqlserver.jdbc.SQLServerConnection; +import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerExecutor; +import com.sap.cx.boosters.commercedbsync.concurrent.MaybeFinished; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceCategory; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceRecorder; +import com.sap.cx.boosters.commercedbsync.performance.PerformanceUnit; +import com.sap.cx.boosters.commercedbsync.strategy.PipeWriterStrategy; +import de.hybris.bootstrap.ddl.DataBaseProvider; + +import java.io.StringReader; +import java.util.Collections; + +import org.apache.commons.collections.MapUtils; +import org.apache.commons.lang.StringUtils; +import com.sap.cx.boosters.commercedbsync.concurrent.DataPipe; +import com.sap.cx.boosters.commercedbsync.concurrent.DataWorkerPoolFactory; +import com.sap.cx.boosters.commercedbsync.concurrent.RetriableTask; +import com.sap.cx.boosters.commercedbsync.concurrent.impl.DefaultDataWorkerExecutor; +import com.sap.cx.boosters.commercedbsync.context.CopyContext; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.dataset.DataColumn; +import com.sap.cx.boosters.commercedbsync.dataset.DataSet; +import com.sap.cx.boosters.commercedbsync.dataset.impl.DefaultDataSet; +import com.sap.cx.boosters.commercedbsync.profile.DataSourceConfiguration; +import com.sap.cx.boosters.commercedbsync.service.DatabaseCopyTaskRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationDataTypeMapperService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; + +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.util.ArrayList; +import java.util.List; +import java.util.Set; +import java.util.TreeSet; +import java.util.concurrent.atomic.AtomicLong; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + + +public class CopyPipeWriterStrategy implements PipeWriterStrategy { + private static final Logger LOG = LoggerFactory.getLogger(CopyPipeWriterStrategy.class); + + private final DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService; + + private final DatabaseCopyTaskRepository taskRepository; + + private final DataWorkerPoolFactory dataWriteWorkerPoolFactory; + + private static final String LP_SUFFIX = "lp"; + + public CopyPipeWriterStrategy(DatabaseMigrationDataTypeMapperService databaseMigrationDataTypeMapperService, DatabaseCopyTaskRepository taskRepository, DataWorkerPoolFactory dataWriteWorkerPoolFactory) { + this.databaseMigrationDataTypeMapperService = databaseMigrationDataTypeMapperService; + this.taskRepository = taskRepository; + this.dataWriteWorkerPoolFactory = dataWriteWorkerPoolFactory; + } + + @Override + public void write(CopyContext context, DataPipe pipe, CopyContext.DataCopyItem item) throws Exception { + // ORACLE_TARGET - START + // Fetch the provider 
to figure out the name of the DBName + final DataBaseProvider dbProvider = context.getMigrationContext().getDataTargetRepository() + .getDatabaseProvider(); + // ORACLE_TARGET - END + String targetTableName = item.getTargetItem(); + PerformanceRecorder performanceRecorder = context.getPerformanceProfiler().createRecorder(PerformanceCategory.DB_WRITE, targetTableName); + performanceRecorder.start(); + Set excludedColumns = new TreeSet<>(String.CASE_INSENSITIVE_ORDER); + if (context.getMigrationContext().getExcludedColumns().containsKey(targetTableName)) { + excludedColumns.addAll(context.getMigrationContext().getExcludedColumns().get(targetTableName)); + LOG.info("Ignoring excluded column(s): {}", excludedColumns); + } + Set nullifyColumns = new TreeSet<>(String.CASE_INSENSITIVE_ORDER); + if (context.getMigrationContext().getNullifyColumns().containsKey(targetTableName)) { + nullifyColumns.addAll(context.getMigrationContext().getNullifyColumns().get(targetTableName)); + LOG.info("Nullify column(s): {}", nullifyColumns); + } + + List columnsToCopy = new ArrayList<>(); + try (Connection sourceConnection = context.getMigrationContext().getDataSourceRepository().getConnection(); + Statement stmt = sourceConnection.createStatement(); + ResultSet metaResult = stmt.executeQuery(String.format("select * from %s where 0 = 1", item.getSourceItem())); + ) { + ResultSetMetaData sourceMeta = metaResult.getMetaData(); + int columnCount = sourceMeta.getColumnCount(); + for (int i = 1; i <= columnCount; i++) { + String column = sourceMeta.getColumnName(i); + if (!excludedColumns.contains(column)) { + columnsToCopy.add(column); + } + } + } + + if (columnsToCopy.isEmpty()) { + throw new IllegalStateException(String.format("%s: source has no columns or all columns excluded", item.getPipelineName())); + } + ThreadPoolTaskExecutor taskExecutor = dataWriteWorkerPoolFactory.create(context); + DataWorkerExecutor workerExecutor = new DefaultDataWorkerExecutor<>(taskExecutor); + Connection targetConnection = null; + AtomicLong totalCount = new AtomicLong(0); + List upsertIds = new ArrayList<>(); + try { + targetConnection = context.getMigrationContext().getDataTargetRepository().getConnection(); + // ORACLE_TARGET - START - pass the dbProvider and dsConfiguration + // information into the requiredidentityinsert function + + boolean requiresIdentityInsert = false; + if (dbProvider.isPostgreSqlUsed()){ + // do nothing + } else { + requiresIdentityInsert = requiresIdentityInsert(item.getTargetItem(), targetConnection, + dbProvider, context.getMigrationContext().getDataTargetRepository().getDataSourceConfiguration()); + } + // ORACLE_TARGET - START - pass the dbProvider info into the + // requiredidentityinsert function + MaybeFinished sourcePage; + boolean firstPage = true; + do { + sourcePage = pipe.get(); + if (sourcePage.isPoison()) { + throw new IllegalStateException("Poison received; dying. 
+                DataSet dataSet = sourcePage.getValue();
+                if (firstPage) {
+                    doTruncateIfNecessary(context, item.getTargetItem());
+                    doTurnOnOffIndicesIfNecessary(context, item.getTargetItem(), false);
+                    if (context.getMigrationContext().isIncrementalModeEnabled()) {
+                        if (context.getMigrationContext().isLpTableMigrationEnabled()
+                                && StringUtils.endsWithIgnoreCase(item.getSourceItem(), LP_SUFFIX)) {
+                            determineLpUpsertId(upsertIds, dataSet);
+                        } else {
+                            determineUpsertId(upsertIds, dataSet);
+                        }
+                    }
+                    firstPage = false;
+                }
+                if (dataSet.isNotEmpty()) {
+                    DataWriterContext dataWriterContext = new DataWriterContext(context, item, dataSet, columnsToCopy, nullifyColumns, performanceRecorder, totalCount, upsertIds, requiresIdentityInsert);
+                    RetriableTask writerTask = createWriterTask(dataWriterContext);
+                    workerExecutor.safelyExecute(writerTask);
+                }
+            } while (!sourcePage.isDone());
+            workerExecutor.waitAndRethrowUncaughtExceptions();
+            if (taskExecutor != null) {
+                taskExecutor.shutdown();
+            }
+        } catch (Exception e) {
+            pipe.requestAbort(e);
+            if (e instanceof InterruptedException) {
+                Thread.currentThread().interrupt();
+            }
+            throw e;
+        } finally {
+            if (targetConnection != null) {
+                doTurnOnOffIndicesIfNecessary(context, item.getTargetItem(), true);
+                targetConnection.close();
+            }
+            updateProgress(context, item, totalCount.get());
+        }
+    }
+
+    private void switchIdentityInsert(Connection connection, final String tableName, boolean on) {
+        final String onOff = on ? "ON" : "OFF";
+        try (Statement stmt = connection.createStatement()) {
+            stmt.executeUpdate(String.format("SET IDENTITY_INSERT %s %s", tableName, onOff));
+        } catch (final Exception e) {
+            // Tables without an identity column make this statement fail; that is expected, so only log it.
+            LOG.debug("Could not switch IDENTITY_INSERT {} for table '{}'", onOff, tableName, e);
+        }
+    }
+
+    protected void executeBatch(CopyContext.DataCopyItem item, PreparedStatement preparedStatement, long batchCount, PerformanceRecorder recorder) throws SQLException {
+        final Stopwatch timer = Stopwatch.createStarted();
+        preparedStatement.executeBatch();
+        preparedStatement.clearBatch();
+        LOG.debug("Batch written ({} items) for table '{}' in {}", batchCount, item.getTargetItem(), timer.stop());
+        recorder.record(PerformanceUnit.ROWS, batchCount);
+    }
+
+    private void updateProgress(CopyContext context, CopyContext.DataCopyItem item, long totalCount) {
+        try {
+            taskRepository.updateTaskProgress(context, item, totalCount);
+        } catch (Exception e) {
+            LOG.warn("Could not update progress", e);
+        }
+    }
+
+    protected void doTruncateIfNecessary(CopyContext context, String targetTableName) throws Exception {
+        if (context.getMigrationContext().isTruncateEnabled()
+                && !context.getMigrationContext().getTruncateExcludedTables().contains(targetTableName)) {
+            assertTruncateAllowed(context, targetTableName);
+            context.getMigrationContext().getDataTargetRepository().truncateTable(targetTableName);
+        }
+    }
+
+    protected void doTurnOnOffIndicesIfNecessary(CopyContext context, String targetTableName, boolean on) throws Exception {
+        if (context.getMigrationContext().isDropAllIndexesEnabled()) {
+            if (!on) {
+                LOG.debug("Dropping indexes for table '{}'", targetTableName);
+                context.getMigrationContext().getDataTargetRepository().dropIndexesOfTable(targetTableName);
+            }
indexes for table '{}'", on ? "Rebuilding" : "Disabling", targetTableName); + if (on) { + context.getMigrationContext().getDataTargetRepository().enableIndexesOfTable(targetTableName); + } else { + context.getMigrationContext().getDataTargetRepository().disableIndexesOfTable(targetTableName); + } + } + } + } + + protected void assertTruncateAllowed(CopyContext context, String targetTableName) throws Exception { + if (context.getMigrationContext().isIncrementalModeEnabled()) { + throw new IllegalStateException("Truncating tables in incremental mode is illegal. Change the property " + CommercedbsyncConstants.MIGRATION_DATA_TRUNCATE_ENABLED + " to false"); + } + } + + protected boolean isColumnOverride(CopyContext context, CopyContext.DataCopyItem item, String sourceColumnName) { + return MapUtils.isNotEmpty(item.getColumnMap()) && item.getColumnMap().containsKey(sourceColumnName); + } + + protected boolean isColumnOverride(CopyContext context, CopyContext.DataCopyItem item) { + return MapUtils.isNotEmpty(item.getColumnMap()); + } + + private PreparedStatement createPreparedStatement(final CopyContext context, final String targetTableName, + final List columnsToCopy, final List upsertIds, final Connection targetConnection) + throws Exception { + if (context.getMigrationContext().isIncrementalModeEnabled()) { + if (!upsertIds.isEmpty()) { + // ORACLE_TARGET - START + String sqlBuild = ""; + if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isOracleUsed()) { + sqlBuild = getBulkUpsertStatementOracle(targetTableName, columnsToCopy, upsertIds.get(0)); + } else if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isHanaUsed()) { + sqlBuild = getBulkUpsertStatementHana(targetTableName, columnsToCopy, upsertIds); + } else if (context.getMigrationContext().getDataTargetRepository().getDatabaseProvider().isPostgreSqlUsed()) { + sqlBuild = getBulkUpsertStatementPostGres(targetTableName, columnsToCopy, upsertIds.get(0)); + } + else { + sqlBuild = getBulkUpsertStatement(targetTableName, columnsToCopy, upsertIds); + } + return targetConnection.prepareStatement(sqlBuild); + // ORACLE_TARGET - END + } else { + throw new RuntimeException( + "The incremental approach can only be used on tables that have a valid identifier like PK or ID"); + } + } else { + return targetConnection.prepareStatement(getBulkInsertStatement(targetTableName, columnsToCopy, + columnsToCopy.stream().map(column -> "?").collect(Collectors.toList()))); + } + } + + private String getBulkInsertStatement(String targetTableName, List columnsToCopy, List columnsToCopyValues) { + return "INSERT INTO " + targetTableName + " " + getBulkInsertStatementParamList(columnsToCopy, columnsToCopyValues); + } + + private String getBulkInsertStatementParamList(List columnsToCopy, List columnsToCopyValues) { + return "(" + + String.join(", ", columnsToCopy) + ") VALUES (" + + columnsToCopyValues.stream().collect(Collectors.joining(", ")) + + ")"; + } + + private String getBulkUpdateStatementParamList(List columnsToCopy, List columnsToCopyValues) { + return "SET " + IntStream.range(0, columnsToCopy.size()).mapToObj(idx -> String.format("%s = %s", columnsToCopy.get(idx), columnsToCopyValues.get(idx))).collect(Collectors.joining(", ")); + } + + // ORACLE_TARGET -- START + private String getBulkUpdateStatementParamListOracle(final List columnsToCopy, + final List columnsToCopyValues) { + + final List columnsToCopyMinusPK = columnsToCopy.stream().filter(s -> !s.equalsIgnoreCase("PK")) + 
+                .collect(Collectors.toList());
+        final List<String> columnsToCopyValuesMinusPK = columnsToCopyValues.stream()
+                .filter(s -> !s.equalsIgnoreCase("s.PK")).collect(Collectors.toList());
+        LOG.debug("getBulkUpdateStatementParamListOracle - columnsToCopyMinusPK = {}", columnsToCopyMinusPK);
+        return "SET " + IntStream.range(0, columnsToCopyMinusPK.size())
+                .mapToObj(idx -> String.format("%s = %s", columnsToCopyMinusPK.get(idx), columnsToCopyValuesMinusPK.get(idx)))
+                .collect(Collectors.joining(", "));
+    }
+    // ORACLE_TARGET -- END
+
+    private void determineUpsertId(List<String> upsertIds, DataSet dataSet) {
+        if (dataSet.hasColumn("PK")) {
+            upsertIds.add("PK");
+        } else if (dataSet.hasColumn("ID")) {
+            upsertIds.add("ID");
+        }
+        // Should we support more ids? In the hybris context there is hardly any other with regards to transactional data.
+    }
+
+    private void determineLpUpsertId(List<String> upsertIds, DataSet dataSet) {
+        if (dataSet.hasColumn("ITEMPK") && dataSet.hasColumn("LANGPK")) {
+            upsertIds.add("ITEMPK");
+            upsertIds.add("LANGPK");
+        }
+        // Should we support more ids? In the hybris context there is hardly any other with regards to transactional data.
+    }
+
+    private String getBulkUpsertStatement(String targetTableName, List<String> columnsToCopy, List<String> upsertIds) {
+        /*
+         * https://michaeljswart.com/2017/07/sql-server-upsert-patterns-and-antipatterns/
+         * We are not using a stored procedure here as CCv2 does not grant sp exec permission to the default db user.
+         */
+        StringBuilder sqlBuilder = new StringBuilder();
+        sqlBuilder.append(String.format("MERGE %s WITH (HOLDLOCK) AS t", targetTableName));
+        sqlBuilder.append("\n");
+        sqlBuilder.append(String.format("USING (SELECT %s) AS s ON ", Joiner.on(',').join(columnsToCopy.stream().map(column -> "? " + column).collect(Collectors.toList()))));
+        sqlBuilder.append(String.format("( %s )", upsertIds.stream().map(column -> String.format(" t.%s = s.%s", column, column)).collect(Collectors.joining(" AND "))));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN MATCHED THEN UPDATE"); // update
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkUpdateStatementParamList(columnsToCopy, columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN NOT MATCHED THEN INSERT"); // insert
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkInsertStatementParamList(columnsToCopy, columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        sqlBuilder.append(";");
+        LOG.debug("UPSERT SQL SERVER SQL builder = {}", sqlBuilder);
+        return sqlBuilder.toString();
+    }
+
+    // ORACLE_TARGET - START
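+    // For illustration only (hypothetical table and columns): with upsert id "PK" and columns PK, p_code,
+    // the generated Oracle statement looks like
+    //   MERGE INTO products t
+    //   USING (SELECT ? PK,? p_code from dual) s ON (t.PK = s.PK)
+    //   WHEN MATCHED THEN UPDATE
+    //   SET p_code = s.p_code
+    //   WHEN NOT MATCHED THEN INSERT
+    //   (PK, p_code) VALUES (s.PK, s.p_code)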
+    private String getBulkUpsertStatementOracle(final String targetTableName, final List<String> columnsToCopy,
+            final String columnId) {
+        final StringBuilder sqlBuilder = new StringBuilder();
+        sqlBuilder.append(String.format("MERGE INTO %s t", targetTableName));
+        sqlBuilder.append("\n");
+        sqlBuilder.append(String.format("USING (SELECT %s from dual) s ON (t.%s = s.%s)",
+                Joiner.on(',').join(columnsToCopy.stream().map(column -> "? " + column).collect(Collectors.toList())),
+                columnId, columnId));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN MATCHED THEN UPDATE"); // update
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkUpdateStatementParamListOracle(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN NOT MATCHED THEN INSERT"); // insert
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkInsertStatementParamList(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        // Note: no trailing ";" here - the Oracle JDBC driver does not accept one in a prepared statement.
+        LOG.debug("UPSERT ORACLE SQL builder = {}", sqlBuilder);
+        return sqlBuilder.toString();
+    }
+    // ORACLE_TARGET - END
+
+    private String getBulkUpsertStatementPostgres(final String targetTableName, final List<String> columnsToCopy,
+            final String columnId) {
+        final StringBuilder sqlBuilder = new StringBuilder();
+        sqlBuilder.append(String.format("MERGE INTO %s t", targetTableName));
+        sqlBuilder.append("\n");
+        // PostgreSQL has no DUAL table; a FROM-less SELECT serves as the single-row source.
+        sqlBuilder.append(String.format("USING (SELECT %s) s ON (t.%s = s.%s)",
+                Joiner.on(',').join(columnsToCopy.stream().map(column -> "? " + column).collect(Collectors.toList())),
+                columnId, columnId));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN MATCHED THEN UPDATE"); // update
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkUpdateStatementParamListOracle(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN NOT MATCHED THEN INSERT"); // insert
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkInsertStatementParamList(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        LOG.debug("UPSERT POSTGRES SQL builder = {}", sqlBuilder);
+        return sqlBuilder.toString();
+    }
+
+    private String getBulkUpsertStatementHana(final String targetTableName, final List<String> columnsToCopy,
+            final List<String> upsertIds) {
+        final StringBuilder sqlBuilder = new StringBuilder();
+        sqlBuilder.append(String.format("MERGE INTO %s t", targetTableName));
+        sqlBuilder.append("\n");
+        sqlBuilder.append(String.format("USING (SELECT %s from dummy) s ON ", Joiner.on(',').join(columnsToCopy.stream().map(column -> "? " + column).collect(Collectors.toList()))));
+        sqlBuilder.append(String.format("( %s )", upsertIds.stream().map(column -> String.format(" t.%s = s.%s", column, column)).collect(Collectors.joining(" AND "))));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN MATCHED THEN UPDATE"); // update
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkUpdateStatementParamListOracle(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        sqlBuilder.append("\n");
+        sqlBuilder.append("WHEN NOT MATCHED THEN INSERT"); // insert
+        sqlBuilder.append("\n");
+        sqlBuilder.append(getBulkInsertStatementParamList(columnsToCopy,
+                columnsToCopy.stream().map(column -> "s." + column).collect(Collectors.toList())));
+        LOG.debug("UPSERT HANA SQL builder = {}", sqlBuilder);
+        return sqlBuilder.toString();
+    }
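+    // For illustration only (hypothetical id column and table): getBulkDeleteStatement("products", "PK") yields
+    //   MERGE products WITH (HOLDLOCK) AS t
+    //   USING (SELECT ? PK) AS s ON t.PK = s.PK
+    //   WHEN MATCHED THEN DELETE;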
" + columnId, columnId, columnId)); + sqlBuilder.append("\n"); + sqlBuilder.append("WHEN MATCHED THEN DELETE"); //DELETE + sqlBuilder.append(";"); + // ORACLE_TARGET + LOG.debug("MERGE-DELETE SQL Server " + sqlBuilder.toString()); + return sqlBuilder.toString(); + } + + // ORACLE_TARGET - START + private String getBulkDeleteStatementOracle(final String targetTableName, final String columnId) { + final StringBuilder sqlBuilder = new StringBuilder(); + sqlBuilder.append(String.format("MERGE INTO %s t", targetTableName)); + sqlBuilder.append("\n"); + // sqlBuilder.append(String.format("USING (SELECT %s , '2022-02-15 + // 10:48:49.496' modifiedTS from dual) s ON (t.%s = s.%s)", + // "? " + columnId, columnId, columnId)); + sqlBuilder.append( + String.format("USING (SELECT %s from dual) s ON (t.%s = s.%s)", "? " + columnId, columnId, columnId)); + sqlBuilder.append("\n"); + sqlBuilder.append("WHEN MATCHED THEN "); // DELETE + sqlBuilder.append("UPDATE SET t.HJMPTS = 0 "); // IS INSERT OR UPDATE + // MANDATORY, therefore + // setting a dummy + // value. Hopefully + // HJMPTS is present in + // all tables + sqlBuilder.append("DELETE WHERE " + String.format(" t.%s = s.%s ", columnId, columnId));// DELETE + // is + // OPTIONAL + // sqlBuilder.append(";"); + // ORACLE_TARGET + LOG.debug("MERGE-DELETE ORACLE " + sqlBuilder.toString()); + return sqlBuilder.toString(); + } + // ORACLE_TARGET - END + + // ORACLE_TARGET -- START Helper Function 1 + private StringBuilder buildSqlForIdentityInsertCheck(final String targetTableName, + final DataBaseProvider dbProvider, final DataSourceConfiguration dsConfig) { + final StringBuilder sqlBuilder = new StringBuilder(); + if (dbProvider.isMssqlUsed()) { + sqlBuilder.append("SELECT \n"); + sqlBuilder.append("count(*)\n"); + sqlBuilder.append("FROM sys.columns\n"); + sqlBuilder.append("WHERE\n"); + sqlBuilder.append(String.format("object_id = object_id('%s')\n", targetTableName)); + sqlBuilder.append("AND\n"); + sqlBuilder.append("is_identity = 1\n"); + sqlBuilder.append(";\n"); + } else if (dbProvider.isOracleUsed()) { + // get schema name + final String schema = dsConfig.getSchema(); + sqlBuilder.append("SELECT \n"); + sqlBuilder.append("has_identity\n"); + sqlBuilder.append("FROM dba_tables\n"); + sqlBuilder.append("WHERE\n"); + sqlBuilder.append(String.format("UPPER(table_name) = UPPER('%s')\n", targetTableName)); + sqlBuilder.append(String.format(" AND UPPER(owner) = UPPER('%s')\n", schema)); + // sqlBuilder.append(";\n"); + } else if (dbProvider.isHanaUsed()) { + // get schema name + final String schema = dsConfig.getSchema(); + sqlBuilder.append("SELECT \n"); + sqlBuilder.append("is_insert_only\n"); + sqlBuilder.append("FROM public.tables\n"); + sqlBuilder.append("WHERE\n"); + sqlBuilder.append(String.format("table_name = UPPER('%s')\n", targetTableName)); + sqlBuilder.append(String.format(" AND schema_name = UPPER('%s')\n", schema)); + // sqlBuilder.append(";\n"); + } + else { + sqlBuilder.append("SELECT \n"); + sqlBuilder.append("count(*)\n"); + sqlBuilder.append("FROM sys.columns\n"); + sqlBuilder.append("WHERE\n"); + sqlBuilder.append(String.format("object_id = object_id('%s')\n", targetTableName)); + sqlBuilder.append("AND\n"); + sqlBuilder.append("is_identity = 1\n"); + sqlBuilder.append(";\n"); + } + LOG.debug("IDENTITY check SQL -> " + sqlBuilder); + return sqlBuilder; + } + // ORACLE_TARGET -- END + + // ORACLE_TARGET -- START Helper Function 2 + private boolean checkIdentityfromResultSet(final ResultSet resultSet, final DataBaseProvider 
+    private boolean checkIdentityFromResultSet(final ResultSet resultSet, final DataBaseProvider dbProvider)
+            throws SQLException {
+        boolean requiresIdentityInsert = false;
+        if (resultSet.next()) {
+            if (dbProvider.isOracleUsed() || dbProvider.isHanaUsed()) {
+                requiresIdentityInsert = resultSet.getBoolean(1);
+            } else {
+                requiresIdentityInsert = resultSet.getInt(1) > 0;
+            }
+        }
+        return requiresIdentityInsert;
+    }
+    // ORACLE_TARGET -- END
+
+    // ORACLE_TARGET -- START
+    private boolean requiresIdentityInsert(final String targetTableName, final Connection targetConnection,
+            final DataBaseProvider dbProvider, final DataSourceConfiguration dsConfig) {
+        final StringBuilder sqlBuilder = buildSqlForIdentityInsertCheck(targetTableName, dbProvider, dsConfig);
+        try (Statement statement = targetConnection.createStatement();
+             ResultSet resultSet = statement.executeQuery(sqlBuilder.toString())) {
+            return checkIdentityFromResultSet(resultSet, dbProvider);
+        } catch (SQLException e) {
+            throw new RuntimeException(e);
+        }
+    }
+    // ORACLE_TARGET -- END
+
+    private RetriableTask createWriterTask(DataWriterContext dwc) {
+        MigrationContext ctx = dwc.getContext().getMigrationContext();
+        if (ctx.isDeletionEnabled()) {
+            return new DataDeleteWriterTask(dwc);
+        }
+        if (!ctx.isBulkCopyEnabled()) {
+            return new DataWriterTask(dwc);
+        }
+        // Bulk copy only works for plain inserts: no nullification, no incremental upserts, no column overrides.
+        boolean noNullification = dwc.getNullifyColumns().isEmpty();
+        boolean noIncremental = !ctx.isIncrementalModeEnabled();
+        boolean noColumnOverride = !isColumnOverride(dwc.getContext(), dwc.getCopyItem());
+        if (noNullification && noIncremental && noColumnOverride) {
+            LOG.warn("EXPERIMENTAL: Using bulk copy for {}", dwc.getCopyItem().getTargetItem());
+            return new DataBulkWriterTask(dwc);
+        }
+        return new DataWriterTask(dwc);
+    }
+
+    private static class DataWriterContext {
+        private final CopyContext context;
+        private final CopyContext.DataCopyItem copyItem;
+        private final DataSet dataSet;
+        private final List<String> columnsToCopy;
+        private final Set<String> nullifyColumns;
+        private final PerformanceRecorder performanceRecorder;
+        private final AtomicLong totalCount;
+        private final List<String> upsertIds;
+        private final boolean requiresIdentityInsert;
+
+        public DataWriterContext(CopyContext context, CopyContext.DataCopyItem copyItem, DataSet dataSet,
+                List<String> columnsToCopy, Set<String> nullifyColumns, PerformanceRecorder performanceRecorder,
+                AtomicLong totalCount, List<String> upsertIds, boolean requiresIdentityInsert) {
+            this.context = context;
+            this.copyItem = copyItem;
+            this.dataSet = dataSet;
+            this.columnsToCopy = columnsToCopy;
+            this.nullifyColumns = nullifyColumns;
+            this.performanceRecorder = performanceRecorder;
+            this.totalCount = totalCount;
+            this.upsertIds = upsertIds;
+            this.requiresIdentityInsert = requiresIdentityInsert;
+        }
+
+        public CopyContext getContext() {
+            return context;
+        }
+
+        public CopyContext.DataCopyItem getCopyItem() {
+            return copyItem;
+        }
+
+        public DataSet getDataSet() {
+            return dataSet;
+        }
+
+        public List<String> getColumnsToCopy() {
+            return columnsToCopy;
+        }
+
+        public Set<String> getNullifyColumns() {
+            return nullifyColumns;
+        }
+
+        public PerformanceRecorder getPerformanceRecorder() {
+            return performanceRecorder;
+        }
+
+        public AtomicLong getTotalCount() {
+            return totalCount;
+        }
+
+        public List<String> getUpsertId() {
+            return upsertIds;
+        }
+
+        public boolean isRequiresIdentityInsert() {
+            return requiresIdentityInsert;
+        }
+    }
+
+    private class DataWriterTask extends RetriableTask {
+
+        private final DataWriterContext ctx;
+
+        public DataWriterTask(DataWriterContext ctx) {
+            super(ctx.getContext(), ctx.getCopyItem().getTargetItem());
+            this.ctx = ctx;
+        }
+
+        @Override
+        protected Boolean internalRun() {
+            try {
+                if (!ctx.getDataSet().getAllResults().isEmpty()) {
+                    process();
+                }
+                return Boolean.TRUE;
+            } catch (Exception e) {
+                throw new RuntimeException("Error processing writer task for " + ctx.getCopyItem().getTargetItem(), e);
+            }
+        }
+
+        private void process() throws Exception {
+            Connection connection = null;
+            Boolean originalAutoCommit = null;
+            boolean requiresIdentityInsert = ctx.isRequiresIdentityInsert();
+            try {
+                connection = ctx.getContext().getMigrationContext().getDataTargetRepository().getConnection();
+                // ORACLE_TARGET - START - fetch the provider to determine the target database vendor
+                final DataBaseProvider dbProvider = ctx.getContext().getMigrationContext().getDataTargetRepository()
+                        .getDatabaseProvider();
+                LOG.debug("TARGET DB name = {}, SOURCE TABLE = {}, TARGET TABLE = {}", dbProvider.getDbName(),
+                        ctx.getCopyItem().getSourceItem(), ctx.getCopyItem().getTargetItem());
+                // ORACLE_TARGET - END
+                originalAutoCommit = connection.getAutoCommit();
+                try (PreparedStatement bulkWriterStatement = createPreparedStatement(ctx.getContext(), ctx.getCopyItem().getTargetItem(), ctx.getColumnsToCopy(), ctx.getUpsertId(), connection);
+                     Statement tempStmt = connection.createStatement();
+                     ResultSet tempTargetRs = tempStmt.executeQuery(String.format("select * from %s where 0 = 1", ctx.getCopyItem().getTargetItem()))) {
+                    connection.setAutoCommit(false);
+                    if (requiresIdentityInsert) {
+                        switchIdentityInsert(connection, ctx.getCopyItem().getTargetItem(), true);
+                    }
+                    // Flags so that the BLOB/CLOB diagnostics below are only logged once per worker.
+                    boolean printed2004 = false;
+                    boolean printed2005 = false;
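+                    // For each row, bind every copied column to the prepared statement. The zero-row
+                    // "where 0 = 1" select above only supplies target metadata: findColumn() maps the
+                    // source column name to its index and getColumnType() to its JDBC type.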
+                    for (List<Object> row : ctx.getDataSet().getAllResults()) {
+                        int sourceColumnTypeIdx = 0;
+                        int paramIdx = 1;
+                        for (String sourceColumnName : ctx.getColumnsToCopy()) {
+                            int targetColumnIdx = tempTargetRs.findColumn(sourceColumnName);
+                            DataColumn sourceColumnType = ((DefaultDataSet) ctx.getDataSet()).getColumnOrder().get(sourceColumnTypeIdx);
+                            int targetColumnType = tempTargetRs.getMetaData().getColumnType(targetColumnIdx);
+                            if (ctx.getNullifyColumns().contains(sourceColumnName)) {
+                                bulkWriterStatement.setNull(paramIdx, targetColumnType);
+                                LOG.trace("Column {} is nullified. Setting NULL value...", sourceColumnName);
+                            } else if (isColumnOverride(ctx.getContext(), ctx.getCopyItem(), sourceColumnName)) {
+                                bulkWriterStatement.setObject(paramIdx, ctx.getCopyItem().getColumnMap().get(sourceColumnName), targetColumnType);
+                            } else {
+                                Object sourceColumnValue;
+                                if (dbProvider.isPostgreSqlUsed()) {
+                                    sourceColumnValue = ctx.getDataSet().getColumnValueForPostGres(sourceColumnName, row, sourceColumnType, targetColumnType);
+                                } else if (dbProvider.isHanaUsed()) {
+                                    sourceColumnValue = ((DefaultDataSet) ctx.getDataSet()).getColumnValueForHANA(sourceColumnName, row, sourceColumnType, targetColumnType);
+                                } else {
+                                    sourceColumnValue = ctx.getDataSet().getColumnValue(sourceColumnName, row);
+                                }
+                                if (sourceColumnValue != null) {
+                                    try {
+                                        if (!dbProvider.isOracleUsed()) {
+                                            // All non-Oracle targets accept a plain typed setObject.
+                                            bulkWriterStatement.setObject(paramIdx, sourceColumnValue, targetColumnType);
+                                        } else {
+                                            // Oracle: setObject(...) throws for LOB columns, so BLOB (2004) and CLOB (2005)
+                                            // need dedicated setters. Example: Products.p_buyerids is varbinary(max) on
+                                            // SQL Server but BLOB on Oracle; Promotion.description is nvarchar(max) on
+                                            // SQL Server but CLOB on Oracle.
+                                            switch (targetColumnType) {
+                                                case java.sql.Types.BLOB: {
+                                                    if (!printed2004) {
+                                                        LOG.debug("BLOB (2004): sourceColumnName = {}, source value type = {}", sourceColumnName, sourceColumnValue.getClass().getTypeName());
+                                                        printed2004 = true;
+                                                    }
+                                                    bulkWriterStatement.setBytes(paramIdx, (byte[]) sourceColumnValue);
+                                                    break;
+                                                }
+                                                case java.sql.Types.CLOB: {
+                                                    if (!printed2005) {
+                                                        LOG.debug("CLOB (2005): sourceColumnName = {}, source value type = {}", sourceColumnName, sourceColumnValue.getClass().getTypeName());
+                                                        printed2005 = true;
+                                                    }
+                                                    // String -> StringReader; setClob fails on an empty value, so bind NULL instead.
+                                                    if (sourceColumnValue instanceof String) {
+                                                        final String clobString = (String) sourceColumnValue;
+                                                        if (!clobString.isEmpty()) {
+                                                            bulkWriterStatement.setClob(paramIdx, new StringReader(clobString), clobString.length());
+                                                        } else {
+                                                            bulkWriterStatement.setNull(paramIdx, targetColumnType);
+                                                        }
+                                                    }
+                                                    break;
+                                                }
+                                                default: {
+                                                    bulkWriterStatement.setObject(paramIdx, sourceColumnValue, targetColumnType);
+                                                    break;
+                                                }
+                                            }
+                                        }
+                                    } catch (final NumberFormatException e) {
+                                        // Handles SQL Server CHAR -> Oracle NUMBER conversions (e.g. Medias.p_fieldseparator)
+                                        // by falling back to the ASCII code of the first character.
+                                        LOG.error("NumberFormatException - error setting value: sourceColumnName = {}, sourceColumnValue = {}, targetColumnType = {}, source type = {}",
+                                                sourceColumnName, sourceColumnValue, targetColumnType, sourceColumnValue.getClass().getTypeName());
+                                        if (dbProvider.isOracleUsed() && sourceColumnValue instanceof String
+                                                && targetColumnType == java.sql.Types.NUMERIC) {
+                                            final int ascii = sourceColumnValue.toString().charAt(0);
+                                            bulkWriterStatement.setInt(paramIdx, ascii);
+                                        }
+                                    } catch (final Exception e) {
+                                        LOG.error("Error setting value: sourceColumnName = {}, sourceColumnValue = {}, targetColumnType = {}, source type = {}",
+                                                sourceColumnName, sourceColumnValue, targetColumnType, sourceColumnValue.getClass().getTypeName(), e);
+                                        throw e;
+                                    }
+                                } else {
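+                                    // Source value is NULL, for all vendors (Oracle, SQL Server, ...): bind a typed
+                                    // NULL, since drivers require the target column's JDBC type for setNull.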
+                                    bulkWriterStatement.setNull(paramIdx, targetColumnType);
+                                }
+                            }
+                            paramIdx += 1;
+                            sourceColumnTypeIdx += 1;
+                        }
+                        bulkWriterStatement.addBatch();
+                    }
+
+                    final int batchCount = ctx.getDataSet().getAllResults().size();
+                    executeBatch(ctx.getCopyItem(), bulkWriterStatement, batchCount, ctx.getPerformanceRecorder());
+                    bulkWriterStatement.clearParameters();
+                    bulkWriterStatement.clearBatch();
+                    connection.commit();
+                    final long totalCount = ctx.getTotalCount().addAndGet(batchCount);
+                    updateProgress(ctx.getContext(), ctx.getCopyItem(), totalCount);
+                }
+            } catch (final Exception e) {
+                if (connection != null) {
+                    connection.rollback();
+                }
+                throw e;
+            } finally {
+                if (connection != null && originalAutoCommit != null) {
+                    connection.setAutoCommit(originalAutoCommit);
+                }
+                if (connection != null) {
+                    if (requiresIdentityInsert) {
+                        switchIdentityInsert(connection, ctx.getCopyItem().getTargetItem(), false);
+                    }
+                    connection.close();
+                }
+            }
+        }
+    }
+
+    private class DataBulkWriterTask extends RetriableTask {
+
+        private final DataWriterContext ctx;
+
+        public DataBulkWriterTask(DataWriterContext ctx) {
+            super(ctx.getContext(), ctx.getCopyItem().getTargetItem());
+            this.ctx = ctx;
+        }
+
+        @Override
+        protected Boolean internalRun() {
+            try {
+                if (!ctx.getDataSet().getAllResults().isEmpty()) {
+                    process();
+                }
+                return Boolean.TRUE;
+            } catch (Exception e) {
+                throw new RuntimeException("Error processing writer task for " + ctx.getCopyItem().getTargetItem(), e);
+            }
+        }
+
+        private void process() throws Exception {
+            Connection connection = null;
+            Boolean originalAutoCommit = null;
+            try {
+                connection = ctx.getContext().getMigrationContext().getDataTargetRepository().getConnection();
+                originalAutoCommit = connection.getAutoCommit();
+                connection.setAutoCommit(false);
+                SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(connection.unwrap(SQLServerConnection.class));
+                SQLServerBulkCopyOptions copyOptions = new SQLServerBulkCopyOptions();
+                copyOptions.setBulkCopyTimeout(0);
+                copyOptions.setBatchSize(ctx.getContext().getMigrationContext().getReaderBatchSize());
+                bulkCopy.setBulkCopyOptions(copyOptions);
+                bulkCopy.setDestinationTableName(ctx.getCopyItem().getTargetItem());
+
+                try (Statement tempStmt = connection.createStatement();
+                     ResultSet tempTargetRs = tempStmt.executeQuery(String.format("select * from %s where 0 = 1", ctx.getCopyItem().getTargetItem()))) {
+                    for (String column : ctx.getColumnsToCopy()) {
+                        int targetColumnIdx = tempTargetRs.findColumn(column);
+                        bulkCopy.addColumnMapping(column, targetColumnIdx);
+                    }
+                }
+                // Start the timer before the transfer so that the logged duration actually covers writeToServer.
+                final Stopwatch timer = Stopwatch.createStarted();
+                bulkCopy.writeToServer(ctx.getDataSet().toSQLServerBulkData());
+                connection.commit();
+                int bulkCount = ctx.getDataSet().getAllResults().size();
+                LOG.debug("Bulk written ({} items) for table '{}' in {}", bulkCount, ctx.getCopyItem().getTargetItem(), timer.stop());
+                ctx.getPerformanceRecorder().record(PerformanceUnit.ROWS, bulkCount);
+                long totalCount = ctx.getTotalCount().addAndGet(bulkCount);
+                updateProgress(ctx.getContext(), ctx.getCopyItem(), totalCount);
+            } catch (Exception e) {
+                if (connection != null) {
+                    connection.rollback();
+                }
+                throw e;
+            } finally {
+                if (connection != null && originalAutoCommit != null) {
+                    connection.setAutoCommit(originalAutoCommit);
+                }
+                if (connection != null) {
+                    connection.close();
+                }
+            }
+        }
+    }
+
+    private class DataDeleteWriterTask extends RetriableTask {
+
+        private final DataWriterContext ctx;
+
+        public DataDeleteWriterTask(DataWriterContext ctx) {
+            super(ctx.getContext(), ctx.getCopyItem().getTargetItem());
+            this.ctx = ctx;
+        }
+
+        @Override
+        protected Boolean internalRun() {
+            try {
+                if (!ctx.getDataSet().getAllResults().isEmpty()
+                        && ctx.getContext().getMigrationContext().isDeletionEnabled()) {
+                    process();
+                }
+                return Boolean.TRUE;
+            } catch (Exception e) {
+                throw new RuntimeException("Error processing writer task for " + ctx.getCopyItem().getTargetItem(), e);
+            }
+        }
+
+        private void process() throws Exception {
+            Connection connection = null;
+            Boolean originalAutoCommit = null;
+            final String pkColumn = "PK";
+            boolean requiresIdentityInsert = ctx.isRequiresIdentityInsert();
+            try {
+                connection = ctx.getContext().getMigrationContext().getDataTargetRepository().getConnection();
+                originalAutoCommit = connection.getAutoCommit();
+                // ORACLE_TARGET - START
+                final String sqlDelete;
+                if (ctx.getContext().getMigrationContext().getDataTargetRepository().getDatabaseProvider()
+                        .isOracleUsed()) {
+                    sqlDelete = getBulkDeleteStatementOracle(ctx.getCopyItem().getTargetItem(), pkColumn);
+                } else {
+                    sqlDelete = getBulkDeleteStatement(ctx.getCopyItem().getTargetItem(), pkColumn);
+                }
+                // ORACLE_TARGET - END
+                try (PreparedStatement bulkWriterStatement = connection.prepareStatement(sqlDelete)) {
+                    connection.setAutoCommit(false);
+                    for (List<Object> row : ctx.getDataSet().getAllResults()) {
+                        Long pkValue = (Long) ctx.getDataSet().getColumnValue("p_itempk", row);
+                        bulkWriterStatement.setObject(1, pkValue);
+                        bulkWriterStatement.addBatch();
+                    }
+                    int batchCount = ctx.getDataSet().getAllResults().size();
+                    executeBatch(ctx.getCopyItem(), bulkWriterStatement, batchCount, ctx.getPerformanceRecorder());
+                    bulkWriterStatement.clearParameters();
+                    bulkWriterStatement.clearBatch();
+                    connection.commit();
+                    long totalCount = ctx.getTotalCount().addAndGet(batchCount);
+                    updateProgress(ctx.getContext(), ctx.getCopyItem(), totalCount);
+                }
+            } catch (Exception e) {
+                if (connection != null) {
+                    connection.rollback();
+                }
+                throw e;
+            } finally {
+                if (connection != null && originalAutoCommit != null) {
+                    connection.setAutoCommit(originalAutoCommit);
+                }
+                if (connection != null) {
+                    if (requiresIdentityInsert) {
+                        switchIdentityInsert(connection, ctx.getCopyItem().getTargetItem(), false);
+                    }
+                    connection.close();
+                }
+            }
+        }
+    }
+
+}
diff --git a/commercedbsync/src/com/sap/cx/boosters/commercedbsync/utils/MaskUtil.java b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/utils/MaskUtil.java
new file mode 100644
index 0000000..650e735
--- /dev/null
+++ b/commercedbsync/src/com/sap/cx/boosters/commercedbsync/utils/MaskUtil.java
@@ -0,0 +1,15 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsync.utils;
+
+public class MaskUtil {
+
+    public static String stripJdbcPassword(final String jdbcConnectionString) {
+        // Mask everything up to the next ";" or, if the password is the last parameter, to the end of the string.
+        return jdbcConnectionString.replaceFirst("password=[^;]*", "password=***");
+    }
+
+}
diff --git a/commercedbsync/src/de/hybris/platform/azure/media/AzureCloudUtils.java b/commercedbsync/src/de/hybris/platform/azure/media/AzureCloudUtils.java
new file mode 100644
index 0000000..9c28745
--- /dev/null
+++ b/commercedbsync/src/de/hybris/platform/azure/media/AzureCloudUtils.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package de.hybris.platform.azure.media;
+
+import de.hybris.platform.core.Registry;
+import de.hybris.platform.media.storage.MediaStorageConfigService.MediaFolderConfig;
+import de.hybris.platform.util.Config;
+import org.apache.commons.lang.StringUtils;
+
+public class AzureCloudUtils {
+    public AzureCloudUtils() {
+    }
+
+    public static String computeContainerAddress(MediaFolderConfig config) {
+        String configuredContainer = config.getParameter("containerAddress");
+        String addressSuffix = StringUtils.isNotBlank(configuredContainer) ? configuredContainer : config.getFolderQualifier();
+        String addressPrefix = getTenantPrefix();
+        return toValidContainerName(addressPrefix + "-" + addressSuffix);
+    }
+
+    private static String toValidContainerName(String name) {
+        return name.toLowerCase().replaceAll("[/. !?]", "").replace('_', '-');
+    }
+
+    private static String toValidPrefixName(String name) {
+        return name.toLowerCase().replaceAll("[/. !?_-]", "");
+    }
+
+    private static String getTenantPrefix() {
+        String defaultPrefix = Registry.getCurrentTenantNoFallback().getTenantID();
+        String prefix = toValidPrefixName(Config.getString("db.tableprefix", defaultPrefix));
+        return "sys-" + prefix.toLowerCase();
+    }
+}
diff --git a/commercedbsync/src/de/hybris/platform/core/TenantPropertiesLoader.java b/commercedbsync/src/de/hybris/platform/core/TenantPropertiesLoader.java
new file mode 100644
index 0000000..81a8bbe
--- /dev/null
+++ b/commercedbsync/src/de/hybris/platform/core/TenantPropertiesLoader.java
@@ -0,0 +1,30 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0 + * + */ +package de.hybris.platform.core; + +import de.hybris.bootstrap.ddl.PropertiesLoader; + +import java.util.Objects; + + +public class TenantPropertiesLoader implements PropertiesLoader { + private final Tenant tenant; + + public TenantPropertiesLoader(final Tenant tenant) { + Objects.requireNonNull(tenant); + this.tenant = tenant; + } + + @Override + public String getProperty(final String key) { + return tenant.getConfig().getParameter(key); + } + + @Override + public String getProperty(final String key, final String defaultValue) { + return tenant.getConfig().getString(key, defaultValue); + } +} \ No newline at end of file diff --git a/commercedbsync/velocity.log b/commercedbsync/velocity.log new file mode 100644 index 0000000..e69de29 diff --git a/commercedbsynchac/.classpath b/commercedbsynchac/.classpath new file mode 100644 index 0000000..bf29992 --- /dev/null +++ b/commercedbsynchac/.classpath @@ -0,0 +1,14 @@ + + + + + + + + + + + + + + diff --git a/commercedbsynchac/.externalToolBuilders/HybrisCodeGeneration.launch b/commercedbsynchac/.externalToolBuilders/HybrisCodeGeneration.launch new file mode 100644 index 0000000..4ab552e --- /dev/null +++ b/commercedbsynchac/.externalToolBuilders/HybrisCodeGeneration.launch @@ -0,0 +1,23 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsynchac/.springBeans b/commercedbsynchac/.springBeans new file mode 100644 index 0000000..fa78869 --- /dev/null +++ b/commercedbsynchac/.springBeans @@ -0,0 +1,16 @@ + + + 1 + + + + + + + resources/commercedbsynchac-spring.xml + + + + + + diff --git a/commercedbsynchac/buildcallbacks.xml b/commercedbsynchac/buildcallbacks.xml new file mode 100644 index 0000000..1750cbc --- /dev/null +++ b/commercedbsynchac/buildcallbacks.xml @@ -0,0 +1,18 @@ + + + + + + + + + + + + + + diff --git a/commercedbsynchac/extensioninfo.xml b/commercedbsynchac/extensioninfo.xml new file mode 100644 index 0000000..f8e32d5 --- /dev/null +++ b/commercedbsynchac/extensioninfo.xml @@ -0,0 +1,24 @@ + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsynchac/external-dependencies.xml b/commercedbsynchac/external-dependencies.xml new file mode 100644 index 0000000..4d5a2e2 --- /dev/null +++ b/commercedbsynchac/external-dependencies.xml @@ -0,0 +1,17 @@ + + + 4.0.0 + de.hybris.platform + commercedbsynchac + 6.7.0.0-RC19 + + jar + + + + diff --git a/commercedbsynchac/hac/resources/jsp/dataCopy.jsp b/commercedbsynchac/hac/resources/jsp/dataCopy.jsp new file mode 100644 index 0000000..c206e8b --- /dev/null +++ b/commercedbsynchac/hac/resources/jsp/dataCopy.jsp @@ -0,0 +1,109 @@ +<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %> +<%@ taglib prefix="hac" uri="/WEB-INF/custom.tld" %> +<%-- + ~ Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + ~ License: Apache-2.0 + ~ + --%> + + + + Migrate Data To SAP Commerce Cloud + " type="text/css" + media="screen, projection"/> + "/> + + + + + + + + +
+
+

Data Migration

+ + + Incremental mode is enabled. Only rows changed after ${incrementalTimestamp} for specified tables will be copied. + + +
+ + +
+
+
+
+
Source Typesystem
+
${srcTsName}
+
Target Typesystem
+
${tgtTsName}
+
+
+
+
+
Source Table Prefix
+
${srcPrefix}
+
Target (Migration) Table Prefix
+
${tgtMigPrefix}
+
Target (Running System) Table Prefix
+
${tgtActualPrefix}
+
+
+
+
+
"> +
+
ID
+
N/A
+
Status
+
N/A
+
+
+
+
+
Total
+
N/A
+
Finished
+
N/A
+
Failed
+
N/A
+
+
+
+
+
Start
+
N/A
+
End
+
N/A
+
Duration
+
N/A
+
+
+
+
+
"> + + +
+
+ +
+

Migration Log

+
+

Migration not started.

+
+
+
+
+
" /> +
+
+ + + diff --git a/commercedbsynchac/hac/resources/jsp/dataSource.jsp b/commercedbsynchac/hac/resources/jsp/dataSource.jsp new file mode 100644 index 0000000..557a21f --- /dev/null +++ b/commercedbsynchac/hac/resources/jsp/dataSource.jsp @@ -0,0 +1,70 @@ +<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %> +<%@ taglib prefix="hac" uri="/WEB-INF/custom.tld" %> +<%-- + ~ Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + ~ License: Apache-2.0 + ~ + --%> + + + + Migrate Data To SAP Commerce Cloud + " type="text/css" media="screen, projection" /> + " type="text/css" media="screen, projection" /> + + + + + + +
+ +
+

Data Migration

+
+ + +
+
+ "> + + + + + + + + + +
PropertyValue
+
+ +
+
+
+ "> + + + + + + + + + +
PropertyValue
+
+ +
+
+
+
+ + + + + + diff --git a/commercedbsynchac/hac/resources/jsp/migrationReports.jsp b/commercedbsynchac/hac/resources/jsp/migrationReports.jsp new file mode 100644 index 0000000..191684b --- /dev/null +++ b/commercedbsynchac/hac/resources/jsp/migrationReports.jsp @@ -0,0 +1,46 @@ +<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %> +<%@ taglib prefix="hac" uri="/WEB-INF/custom.tld" %> +<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> +<%-- + ~ Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + ~ License: Apache-2.0 + ~ + --%> + + + + Copy Schema To SAP Commerce Cloud + " type="text/css" media="screen, projection"/> + " type="text/css" + media="screen, projection"/> + " type="text/css" + media="screen, projection"/> + + + + + + + +
+ +
+

Migration Reports

+
+ + + + + + + + + + + +
Report idTimestampDownload
+
+
+
+ + diff --git a/commercedbsynchac/hac/resources/jsp/schemaCopy.jsp b/commercedbsynchac/hac/resources/jsp/schemaCopy.jsp new file mode 100644 index 0000000..82fad77 --- /dev/null +++ b/commercedbsynchac/hac/resources/jsp/schemaCopy.jsp @@ -0,0 +1,106 @@ +<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %> +<%@ taglib prefix="hac" uri="/WEB-INF/custom.tld" %> +<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> +<%-- + ~ Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + ~ License: Apache-2.0 + ~ + --%> + + + + Copy Schema To SAP Commerce Cloud + " type="text/css" media="screen, projection" /> + " type="text/css" media="screen, projection" /> + " type="text/css" media="screen, projection" /> + " type="text/css" media="screen, projection" /> + + + + + + + + + +
+ +
+

Schema Migration

+
+ +
+ +
+

Target Schema

+

Target Schema is missing the following elements which are present in Source Schema

+ + + + + + + + + + +
Missing TableMissing Column
+

Source Schema

+

Source Schema is missing the following elements which are present in Target Schema

+ + + + + + + + + + +
Missing TableMissing Column
+
+
+
+
+

Schema Migration Configuration

+ +
+ checked="checked" > + +
+
+
+ +
+

Generate SQL Script

+ +
+
+ " alt="spinner"/> +
+ +
+ +
+
+

Execute SQL Script

+ + After the script generation check that your schema differences are correctly reflected by the SQL statements. + The checks may include completeness of 'add' and 'drop' statements as well as the corresponding data types. + Once verified, accept the box below and execute the script. The changes will only affect the target database. + + + +
+
+
+
+
+
+ + \ No newline at end of file diff --git a/commercedbsynchac/hac/resources/static/css/dataCopy.css b/commercedbsynchac/hac/resources/static/css/dataCopy.css new file mode 100644 index 0000000..e5e9f10 --- /dev/null +++ b/commercedbsynchac/hac/resources/static/css/dataCopy.css @@ -0,0 +1,71 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +.placeholder { + color: dimgrey; +} + +.status dd { + font-size: 1rem; +} + +.completed { + color: green; +} + +.failed { + color: red; +} + +#copySummary .total, #copySummary .failed, #copySummary .completed { + font-weight: bold; +} + +#copyStatus .completed { + color: green; + text-transform: uppercase; + font-weight: bolder; +} + +#copyStatus .failed { + text-transform: uppercase; + font-weight: bolder; +} + +#copyLogContainer { + height: 600px; + overflow: auto; + font-family: monospace; + font-size: 1rem; + background-color: #FAFAFF; + padding: 1rem; + margin: 1rem 1rem 1rem 0; + border: 1px grey dashed; + border-radius: 3px; +} + +#copyLogContainer p + p { + text-indent: 0; +} + +#copyLogContainer .failed { + font-weight: bold; + font-size: 1.02em +} + +#copyLogContainer .completed { + font-weight: bold; + font-size: 1.02em +} + +button[disabled] { + cursor: default; + opacity: 0.5; +} + +button.control-button { + float:left; +} diff --git a/commercedbsynchac/hac/resources/static/css/database.css b/commercedbsynchac/hac/resources/static/css/database.css new file mode 100644 index 0000000..8c8565f --- /dev/null +++ b/commercedbsynchac/hac/resources/static/css/database.css @@ -0,0 +1,70 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +.nobox { + border: 0; + width: 100%; +} + +.textarea { + height: auto; +} + +#spinner { + margin: 100px auto; + opacity: 0.5; +} + +.spinner { + opacity: 0.5; +} + +#spinnerWrapper,#loggingSpinnerWrapper { + text-align: center; +} + +#tableWrapper { + display: none; + padding-bottom: 2em; +} + +#tableCopySchemaWrapper { + display: none; + padding-bottom: 2em; +} + +#tableCopyDataWrapper { + display: none; + padding-bottom: 2em; +} + +#loggingContentWrapper,#downloadLog,#slider-size,#downloadForm,#analyzeResults { + display: none; +} + +#loggingContentWrapper { + margin-bottom: 3em; +} + +#dataSourceInfos legend { + color: #005BBC; + font-size: 16px; +} + +.floatLeft { + float: left; +} +#copyStatusContainer { + +} + +#copyStatusContainer dd { + font-size: 1.2em; +} + +.progress { + font-weight: bolder; +} diff --git a/commercedbsynchac/hac/resources/static/css/schemaCopy.css b/commercedbsynchac/hac/resources/static/css/schemaCopy.css new file mode 100644 index 0000000..be85139 --- /dev/null +++ b/commercedbsynchac/hac/resources/static/css/schemaCopy.css @@ -0,0 +1,52 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +.CodeMirror { + height: 100%; +} + +.CodeMirror-line-numbers { + background-color: lightgray; + border-right: 1px solid #eee; + min-width: 2em; + height: 100%; + color: gray; + text-align: right; + padding: .4em .2em .4em .4em; + font-family: "Consolas", "Monaco", "Bitstream Vera Sans Mono", "Courier New", Courier, monospace !important; +} + +.border { + background-color: #FAFAFF; + border: 1px solid darkgray; +} + +.textarea-container { + position: relative; + border: 1px dashed #666 !important; +} + +textarea { + width: 100%; +} + +#spinnerWrapper { + display: none; + margin-top: 100px; + width: 100%; + position: absolute; + z-index: 1000; + text-align: center; +} + +textarea { + width: 100%; +} + +button[disabled] { + cursor: default; + opacity: 0.5; +} \ No newline at end of file diff --git a/commercedbsynchac/hac/resources/static/css/table.css b/commercedbsynchac/hac/resources/static/css/table.css new file mode 100644 index 0000000..e6fb3fd --- /dev/null +++ b/commercedbsynchac/hac/resources/static/css/table.css @@ -0,0 +1 @@ +table.dataTable{width:100%;margin:0 auto;clear:both;border-collapse:separate;border-spacing:0}table.dataTable thead th,table.dataTable tfoot th{font-weight:bold}table.dataTable thead th,table.dataTable thead td{padding:10px 18px;border-bottom:1px solid #111}table.dataTable thead th:active,table.dataTable thead td:active{outline:none}table.dataTable tfoot th,table.dataTable tfoot td{padding:10px 18px 6px 18px;border-top:1px solid #111}table.dataTable thead .sorting,table.dataTable thead .sorting_asc,table.dataTable thead .sorting_desc,table.dataTable thead .sorting_asc_disabled,table.dataTable thead .sorting_desc_disabled{cursor:pointer;*cursor:hand}table.dataTable thead .sorting,table.dataTable thead .sorting_asc,table.dataTable thead .sorting_desc,table.dataTable thead .sorting_asc_disabled,table.dataTable thead .sorting_desc_disabled{background-repeat:no-repeat;background-position:center right}table.dataTable thead .sorting{background-image:url("../images/sort_both.png")}table.dataTable thead .sorting_asc{background-image:url("../images/sort_asc.png")}table.dataTable thead .sorting_desc{background-image:url("../images/sort_desc.png")}table.dataTable thead .sorting_asc_disabled{background-image:url("../images/sort_asc_disabled.png")}table.dataTable thead .sorting_desc_disabled{background-image:url("../images/sort_desc_disabled.png")}table.dataTable tbody tr{background-color:#ffffff}table.dataTable tbody tr.selected{background-color:#B0BED9}table.dataTable tbody th,table.dataTable tbody td{padding:8px 10px}table.dataTable.row-border tbody th,table.dataTable.row-border tbody td,table.dataTable.display tbody th,table.dataTable.display tbody td{border-top:1px solid #ddd}table.dataTable.row-border tbody tr:first-child th,table.dataTable.row-border tbody tr:first-child td,table.dataTable.display tbody tr:first-child th,table.dataTable.display tbody tr:first-child td{border-top:none}table.dataTable.cell-border tbody th,table.dataTable.cell-border tbody td{border-top:1px solid #ddd;border-right:1px solid #ddd}table.dataTable.cell-border tbody tr th:first-child,table.dataTable.cell-border tbody tr td:first-child{border-left:1px solid #ddd}table.dataTable.cell-border tbody tr:first-child th,table.dataTable.cell-border tbody tr:first-child td{border-top:none}table.dataTable.stripe tbody tr.odd,table.dataTable.display tbody tr.odd{background-color:#f9f9f9}table.dataTable.stripe tbody tr.odd.selected,table.dataTable.display tbody 
tr.odd.selected{background-color:#acbad4}table.dataTable.hover tbody tr:hover,table.dataTable.display tbody tr:hover{background-color:#f6f6f6}table.dataTable.hover tbody tr:hover.selected,table.dataTable.display tbody tr:hover.selected{background-color:#aab7d1}table.dataTable.order-column tbody tr>.sorting_1,table.dataTable.order-column tbody tr>.sorting_2,table.dataTable.order-column tbody tr>.sorting_3,table.dataTable.display tbody tr>.sorting_1,table.dataTable.display tbody tr>.sorting_2,table.dataTable.display tbody tr>.sorting_3{background-color:#fafafa}table.dataTable.order-column tbody tr.selected>.sorting_1,table.dataTable.order-column tbody tr.selected>.sorting_2,table.dataTable.order-column tbody tr.selected>.sorting_3,table.dataTable.display tbody tr.selected>.sorting_1,table.dataTable.display tbody tr.selected>.sorting_2,table.dataTable.display tbody tr.selected>.sorting_3{background-color:#acbad5}table.dataTable.display tbody tr.odd>.sorting_1,table.dataTable.order-column.stripe tbody tr.odd>.sorting_1{background-color:#f1f1f1}table.dataTable.display tbody tr.odd>.sorting_2,table.dataTable.order-column.stripe tbody tr.odd>.sorting_2{background-color:#f3f3f3}table.dataTable.display tbody tr.odd>.sorting_3,table.dataTable.order-column.stripe tbody tr.odd>.sorting_3{background-color:whitesmoke}table.dataTable.display tbody tr.odd.selected>.sorting_1,table.dataTable.order-column.stripe tbody tr.odd.selected>.sorting_1{background-color:#a6b4cd}table.dataTable.display tbody tr.odd.selected>.sorting_2,table.dataTable.order-column.stripe tbody tr.odd.selected>.sorting_2{background-color:#a8b5cf}table.dataTable.display tbody tr.odd.selected>.sorting_3,table.dataTable.order-column.stripe tbody tr.odd.selected>.sorting_3{background-color:#a9b7d1}table.dataTable.display tbody tr.even>.sorting_1,table.dataTable.order-column.stripe tbody tr.even>.sorting_1{background-color:#fafafa}table.dataTable.display tbody tr.even>.sorting_2,table.dataTable.order-column.stripe tbody tr.even>.sorting_2{background-color:#fcfcfc}table.dataTable.display tbody tr.even>.sorting_3,table.dataTable.order-column.stripe tbody tr.even>.sorting_3{background-color:#fefefe}table.dataTable.display tbody tr.even.selected>.sorting_1,table.dataTable.order-column.stripe tbody tr.even.selected>.sorting_1{background-color:#acbad5}table.dataTable.display tbody tr.even.selected>.sorting_2,table.dataTable.order-column.stripe tbody tr.even.selected>.sorting_2{background-color:#aebcd6}table.dataTable.display tbody tr.even.selected>.sorting_3,table.dataTable.order-column.stripe tbody tr.even.selected>.sorting_3{background-color:#afbdd8}table.dataTable.display tbody tr:hover>.sorting_1,table.dataTable.order-column.hover tbody tr:hover>.sorting_1{background-color:#eaeaea}table.dataTable.display tbody tr:hover>.sorting_2,table.dataTable.order-column.hover tbody tr:hover>.sorting_2{background-color:#ececec}table.dataTable.display tbody tr:hover>.sorting_3,table.dataTable.order-column.hover tbody tr:hover>.sorting_3{background-color:#efefef}table.dataTable.display tbody tr:hover.selected>.sorting_1,table.dataTable.order-column.hover tbody tr:hover.selected>.sorting_1{background-color:#a2aec7}table.dataTable.display tbody tr:hover.selected>.sorting_2,table.dataTable.order-column.hover tbody tr:hover.selected>.sorting_2{background-color:#a3b0c9}table.dataTable.display tbody tr:hover.selected>.sorting_3,table.dataTable.order-column.hover tbody 
tr:hover.selected>.sorting_3{background-color:#a5b2cb}table.dataTable.no-footer{border-bottom:1px solid #111}table.dataTable.nowrap th,table.dataTable.nowrap td{white-space:nowrap}table.dataTable.compact thead th,table.dataTable.compact thead td{padding:4px 17px 4px 4px}table.dataTable.compact tfoot th,table.dataTable.compact tfoot td{padding:4px}table.dataTable.compact tbody th,table.dataTable.compact tbody td{padding:4px}table.dataTable th.dt-left,table.dataTable td.dt-left{text-align:left}table.dataTable th.dt-center,table.dataTable td.dt-center,table.dataTable td.dataTables_empty{text-align:center}table.dataTable th.dt-right,table.dataTable td.dt-right{text-align:right}table.dataTable th.dt-justify,table.dataTable td.dt-justify{text-align:justify}table.dataTable th.dt-nowrap,table.dataTable td.dt-nowrap{white-space:nowrap}table.dataTable thead th.dt-head-left,table.dataTable thead td.dt-head-left,table.dataTable tfoot th.dt-head-left,table.dataTable tfoot td.dt-head-left{text-align:left}table.dataTable thead th.dt-head-center,table.dataTable thead td.dt-head-center,table.dataTable tfoot th.dt-head-center,table.dataTable tfoot td.dt-head-center{text-align:center}table.dataTable thead th.dt-head-right,table.dataTable thead td.dt-head-right,table.dataTable tfoot th.dt-head-right,table.dataTable tfoot td.dt-head-right{text-align:right}table.dataTable thead th.dt-head-justify,table.dataTable thead td.dt-head-justify,table.dataTable tfoot th.dt-head-justify,table.dataTable tfoot td.dt-head-justify{text-align:justify}table.dataTable thead th.dt-head-nowrap,table.dataTable thead td.dt-head-nowrap,table.dataTable tfoot th.dt-head-nowrap,table.dataTable tfoot td.dt-head-nowrap{white-space:nowrap}table.dataTable tbody th.dt-body-left,table.dataTable tbody td.dt-body-left{text-align:left}table.dataTable tbody th.dt-body-center,table.dataTable tbody td.dt-body-center{text-align:center}table.dataTable tbody th.dt-body-right,table.dataTable tbody td.dt-body-right{text-align:right}table.dataTable tbody th.dt-body-justify,table.dataTable tbody td.dt-body-justify{text-align:justify}table.dataTable tbody th.dt-body-nowrap,table.dataTable tbody td.dt-body-nowrap{white-space:nowrap}table.dataTable,table.dataTable th,table.dataTable td{-webkit-box-sizing:content-box;box-sizing:content-box}.dataTables_wrapper{position:relative;clear:both;*zoom:1;zoom:1}.dataTables_wrapper .dataTables_length{float:left}.dataTables_wrapper .dataTables_filter{float:right;text-align:right}.dataTables_wrapper .dataTables_filter input{margin-left:0.5em}.dataTables_wrapper .dataTables_info{clear:both;float:left;padding-top:0.755em}.dataTables_wrapper .dataTables_paginate{float:right;text-align:right;padding-top:0.25em}.dataTables_wrapper .dataTables_paginate .paginate_button{box-sizing:border-box;display:inline-block;min-width:1.5em;padding:0.5em 1em;margin-left:2px;text-align:center;text-decoration:none !important;cursor:pointer;*cursor:hand;color:#333 !important;border:1px solid transparent;border-radius:2px}.dataTables_wrapper .dataTables_paginate .paginate_button.current,.dataTables_wrapper .dataTables_paginate .paginate_button.current:hover{color:#333 !important;border:1px solid #979797;background-color:white;background:-webkit-gradient(linear, left top, left bottom, color-stop(0%, #fff), color-stop(100%, #dcdcdc));background:-webkit-linear-gradient(top, #fff 0%, #dcdcdc 100%);background:-moz-linear-gradient(top, #fff 0%, #dcdcdc 100%);background:-ms-linear-gradient(top, #fff 0%, #dcdcdc 
100%);background:-o-linear-gradient(top, #fff 0%, #dcdcdc 100%);background:linear-gradient(to bottom, #fff 0%, #dcdcdc 100%)}.dataTables_wrapper .dataTables_paginate .paginate_button.disabled,.dataTables_wrapper .dataTables_paginate .paginate_button.disabled:hover,.dataTables_wrapper .dataTables_paginate .paginate_button.disabled:active{cursor:default;color:#666 !important;border:1px solid transparent;background:transparent;box-shadow:none}.dataTables_wrapper .dataTables_paginate .paginate_button:hover{color:white !important;border:1px solid #111;background-color:#585858;background:-webkit-gradient(linear, left top, left bottom, color-stop(0%, #585858), color-stop(100%, #111));background:-webkit-linear-gradient(top, #585858 0%, #111 100%);background:-moz-linear-gradient(top, #585858 0%, #111 100%);background:-ms-linear-gradient(top, #585858 0%, #111 100%);background:-o-linear-gradient(top, #585858 0%, #111 100%);background:linear-gradient(to bottom, #585858 0%, #111 100%)}.dataTables_wrapper .dataTables_paginate .paginate_button:active{outline:none;background-color:#2b2b2b;background:-webkit-gradient(linear, left top, left bottom, color-stop(0%, #2b2b2b), color-stop(100%, #0c0c0c));background:-webkit-linear-gradient(top, #2b2b2b 0%, #0c0c0c 100%);background:-moz-linear-gradient(top, #2b2b2b 0%, #0c0c0c 100%);background:-ms-linear-gradient(top, #2b2b2b 0%, #0c0c0c 100%);background:-o-linear-gradient(top, #2b2b2b 0%, #0c0c0c 100%);background:linear-gradient(to bottom, #2b2b2b 0%, #0c0c0c 100%);box-shadow:inset 0 0 3px #111}.dataTables_wrapper .dataTables_paginate .ellipsis{padding:0 1em}.dataTables_wrapper .dataTables_processing{position:absolute;top:50%;left:50%;width:100%;height:40px;margin-left:-50%;margin-top:-25px;padding-top:20px;text-align:center;font-size:1.2em;background-color:white;background:-webkit-gradient(linear, left top, right top, color-stop(0%, rgba(255,255,255,0)), color-stop(25%, rgba(255,255,255,0.9)), color-stop(75%, rgba(255,255,255,0.9)), color-stop(100%, rgba(255,255,255,0)));background:-webkit-linear-gradient(left, rgba(255,255,255,0) 0%, rgba(255,255,255,0.9) 25%, rgba(255,255,255,0.9) 75%, rgba(255,255,255,0) 100%);background:-moz-linear-gradient(left, rgba(255,255,255,0) 0%, rgba(255,255,255,0.9) 25%, rgba(255,255,255,0.9) 75%, rgba(255,255,255,0) 100%);background:-ms-linear-gradient(left, rgba(255,255,255,0) 0%, rgba(255,255,255,0.9) 25%, rgba(255,255,255,0.9) 75%, rgba(255,255,255,0) 100%);background:-o-linear-gradient(left, rgba(255,255,255,0) 0%, rgba(255,255,255,0.9) 25%, rgba(255,255,255,0.9) 75%, rgba(255,255,255,0) 100%);background:linear-gradient(to right, rgba(255,255,255,0) 0%, rgba(255,255,255,0.9) 25%, rgba(255,255,255,0.9) 75%, rgba(255,255,255,0) 100%)}.dataTables_wrapper .dataTables_length,.dataTables_wrapper .dataTables_filter,.dataTables_wrapper .dataTables_info,.dataTables_wrapper .dataTables_processing,.dataTables_wrapper .dataTables_paginate{color:#333}.dataTables_wrapper .dataTables_scroll{clear:both}.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody{*margin-top:-1px;-webkit-overflow-scrolling:touch}.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>thead>tr>th,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>thead>tr>td,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>tbody>tr>th,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>tbody>tr>td{vertical-align:middle}.dataTables_wrapper .dataTables_scroll 
div.dataTables_scrollBody>table>thead>tr>th>div.dataTables_sizing,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>thead>tr>td>div.dataTables_sizing,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>tbody>tr>th>div.dataTables_sizing,.dataTables_wrapper .dataTables_scroll div.dataTables_scrollBody>table>tbody>tr>td>div.dataTables_sizing{height:0;overflow:hidden;margin:0 !important;padding:0 !important}.dataTables_wrapper.no-footer .dataTables_scrollBody{border-bottom:1px solid #111}.dataTables_wrapper.no-footer div.dataTables_scrollHead>table,.dataTables_wrapper.no-footer div.dataTables_scrollBody>table{border-bottom:none}.dataTables_wrapper:after{visibility:hidden;display:block;content:"";clear:both;height:0}@media screen and (max-width: 767px){.dataTables_wrapper .dataTables_info,.dataTables_wrapper .dataTables_paginate{float:none;text-align:center}.dataTables_wrapper .dataTables_paginate{margin-top:0.5em}}@media screen and (max-width: 640px){.dataTables_wrapper .dataTables_length,.dataTables_wrapper .dataTables_filter{float:none;text-align:center}.dataTables_wrapper .dataTables_filter{margin-top:0.5em}} \ No newline at end of file diff --git a/commercedbsynchac/hac/resources/static/js/customStatistics.js b/commercedbsynchac/hac/resources/static/js/customStatistics.js new file mode 100644 index 0000000..859886d --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/customStatistics.js @@ -0,0 +1,11 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +$(function() { + $('#statistics').dataTable({ + "iDisplayLength" : 50 + }); +}) \ No newline at end of file diff --git a/commercedbsynchac/hac/resources/static/js/dataCopy.js b/commercedbsynchac/hac/resources/static/js/dataCopy.js new file mode 100644 index 0000000..3b6a75e --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/dataCopy.js @@ -0,0 +1,271 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +'use strict'; +(function () { + function setupMigration() { + const startButton = document.getElementById("buttonCopyData") + const stopButton = document.getElementById("buttonStopCopyData") + const startUrl = startButton.dataset.url; + const stopUrl = stopButton.dataset.url; + const statusContainer = document.getElementById('copyStatus'); + const summaryContainer = document.getElementById('copySummary'); + const timeContainer = document.getElementById('copyTime'); + const statusUrl = statusContainer.dataset.url; + const logContainer = document.getElementById("copyLogContainer"); + const reportButton = document.getElementById("buttonCopyReport") + const reportForm = document.getElementById("formCopyReport") + const token = document.querySelector('meta[name="_csrf"]').content; + const switchPrefixButton = document.getElementById("buttonSwitchPrefix") + let lastUpdateTime = Date.UTC(1970, 0, 1, 0, 0, 0); + let pollInterval; + let startButtonContentBefore; + let currentMigrationID; + + startButton.disabled = true; + startButton.addEventListener('click', copyData); + stopButton.disabled = true; + stopButton.addEventListener('click', stopCopy); + switchPrefixButton.disabled = true; + switchPrefixButton.addEventListener('click', switchPrefix); + + resumeRunning(); + + function empty(element) { + while (element.firstChild) { + element.removeChild(element.lastChild); + } + } + function formatEpoch(epoch) { + if (epoch) { + return new Date(epoch).toISOString(); + } else { + return "N/A"; + } + } + function formatDuration(startEpoch, endEpoch) { + if(!startEpoch || !endEpoch) { + return "N/A"; + } else { + let sec_num = (endEpoch - startEpoch) / 1000; + let hours = Math.floor(sec_num / 3600); + let minutes = Math.floor((sec_num - (hours * 3600)) / 60); + let seconds = sec_num - (hours * 3600) - (minutes * 60); + if (hours < 10) {hours = "0"+hours;} + if (minutes < 10) {minutes = "0"+minutes;} + if (seconds < 10) {seconds = "0"+seconds;} + return hours+':'+minutes+':'+seconds; + } + + } + + function switchPrefix() { + let switchButtonContentBefore = switchPrefixButton.innerHTML; + switchPrefixButton.innerHTML = switchButtonContentBefore + ' ' + hac.global.getSpinnerImg(); + $.ajax({ + url: switchPrefixButton.dataset.url, + type: 'PUT', + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + switchPrefixButton.innerHTML = switchButtonContentBefore; + }, + error: hac.global.err + }); + } + + function resumeRunning() { + $.ajax({ + url: '/hac/commercedbsynchac/resumeRunning', + type: 'GET', + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + if(data && data.status === 'RUNNING') { + startButtonContentBefore = startButton.innerHTML; + startButton.innerHTML = startButtonContentBefore + ' ' + hac.global.getSpinnerImg(); + startButton.disabled = true; + reportButton.disabled = true; + stopButton.disabled = false; + currentMigrationID = data.migrationID; + empty(logContainer); + updateStatus(data); + doPoll(); + pollInterval = setInterval(doPoll, 5000); + } else { + startButton.disabled = false; + } + }, + error: function (data) { + startButton.disabled = false; + } + }); + } + + function copyData() { + startButtonContentBefore = startButton.innerHTML; + startButton.innerHTML = startButtonContentBefore + ' ' + hac.global.getSpinnerImg(); + startButton.disabled = true; + reportButton.disabled = true; + stopButton.disabled = false; + $.ajax({ + url: startUrl, + type: 'PUT', 
+ headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + currentMigrationID = data.migrationID; + empty(logContainer); + updateStatus(data); + doPoll(); + pollInterval = setInterval(doPoll, 5000); + }, + error: function(xht, textStatus, ex) { + hac.global.error("Data migration process failed, please check the logs"); + + stopButton.disabled = true; + startButton.innerHTML = startButtonContentBefore; + startButton.disabled = false; + } + }); + } + + function stopCopy() { + stopButton.disabled = true; + startButton.innerHTML = startButtonContentBefore; + startButton.disabled = false; + $.ajax({ + url: stopUrl, + type: 'PUT', + data: currentMigrationID, + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + }, + error: hac.global.err + }); + } + + function updateStatus(status) { + const statusSummary = document.createElement('dl'); + statusSummary.classList.add("summary"); + let dt = document.createElement('dt') + let dd = document.createElement('dd') + dt.innerText = "ID"; + statusSummary.appendChild(dt); + dd.innerText = status.migrationID; + statusSummary.appendChild(dd); + dt = document.createElement("dt"); + dt.innerText = "Status"; + statusSummary.appendChild(dt); + dd = document.createElement("dd"); + dd.classList.add('status'); + statusSummary.appendChild(dd); + if (status.failed) { + dd.innerText = "Failed"; + dd.classList.add("failed"); + } else if (status.completed) { + dd.innerText = "Completed"; + dd.classList.add("completed") + } else { + dd.innerHTML = `In Progress...
<br/>(last update: ${formatEpoch(status.lastUpdateEpoch)})` + } + empty(statusContainer); + statusContainer.appendChild(statusSummary); + + const progressSummary = document.createElement("dl"); + progressSummary.classList.add("progress"); + progressSummary.innerHTML = + `<dt>Total</dt><dd>${status.totalTasks}</dd>` + `<dt>Completed</dt><dd>${status.completedTasks}</dd>` + `<dt>Failed</dt><dd>${status.failedTasks}</dd>`; + empty(summaryContainer); + summaryContainer.appendChild(progressSummary); + + const timeSummary = document.createElement("dl"); + timeSummary.innerHTML = + `<dt>Start</dt><dd>${formatEpoch(status.startEpoch)}</dd>` + `<dt>End</dt><dd>${formatEpoch(status.endEpoch)}</dd>` + `<dt>Duration</dt><dd>${formatDuration(status.startEpoch, status.endEpoch)}</dd>
`; + empty(timeContainer); + timeContainer.appendChild(timeSummary); + } + + function doPoll() { + console.log(new Date(lastUpdateTime).toISOString()); + $.ajax({ + url: statusUrl, + type: 'GET', + data: { + migrationID: currentMigrationID, + since: lastUpdateTime + }, + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (status) { + // Sticky scroll: https://stackoverflow.com/a/21067431 + // allow 1px inaccuracy by adding 1 + const isScrolledToBottom = logContainer.scrollHeight - logContainer.clientHeight <= logContainer.scrollTop + 1 + writeLogs(status.statusUpdates); + if (isScrolledToBottom) { + logContainer.scrollTop = logContainer.scrollHeight - logContainer.clientHeight + } + updateStatus(status); + if (status.completed || status.failed) { + startButton.innerHTML = startButtonContentBefore + startButton.disabled = false; + stopButton.disabled = true; + $(reportForm).children('input[name=migrationId]').val(currentMigrationID); + reportButton.disabled = false; + clearInterval(pollInterval); + } + }, + error: function(xhr, status, error) { + console.error('Could not get status data'); + } + }); + lastUpdateTime = Date.now(); + } + + function writeLogs(statusUpdates) { + statusUpdates.forEach(function (entry) { + let message = `${formatEpoch(entry.lastUpdateEpoch)} | ${entry.pipelinename} | ${entry.targetrowcount} / ${entry.sourcerowcount} | `; + let p = document.createElement("p"); + if (entry.failure) { + message += `FAILED! Reason: ${entry.error}`; + p.classList.add("failed"); + }else if (entry.completed) { + message += `Completed in ${entry.duration}`; + p.classList.add("completed"); + } else { + message += "In progress..." + } + p.textContent = message; + logContainer.appendChild(p); + }); + } + } + + function domReady(fn) { + document.addEventListener("DOMContentLoaded", fn); + if (document.readyState === "interactive" || document.readyState === "complete") { + fn(); + } + } + + domReady(setupMigration); +})(); + diff --git a/commercedbsynchac/hac/resources/static/js/dataSource.js b/commercedbsynchac/hac/resources/static/js/dataSource.js new file mode 100644 index 0000000..ac21ddb --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/dataSource.js @@ -0,0 +1,165 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ + +var tableDsSource; +var tableDsTarget; +var buttonDsSourceValidate = "Validate Connection"; +var buttonDsTargetValidate = "Validate Connection"; + +$(document).ready(function() { + + tableDsSource = $('#tableDsSource').dataTable({ + "bStateSave": true, + "bAutoWidth": false, + "aLengthMenu" : [[10,25,50,100,-1], [10,25,50,100,'all']] + }); + + tableDsTarget = $('#tableDsTarget').dataTable({ + "bStateSave": true, + "bAutoWidth": false, + "aLengthMenu" : [[10,25,50,100,-1], [10,25,50,100,'all']] + }); + + loadSource(); + loadTarget(); + + $( "#tabs" ).tabs({ + activate: function(event, ui) { + if ( ui.newPanel.attr('id') == 'tabs-1') { + + } + if ( ui.newPanel.attr('id') == 'tabs-2') { + + } + + //toggleActiveSidebar(ui.newPanel.attr('id').replace(/^.*-/, '')); + } + }); + + $('#buttonDsSourceValidate').click(validateSource); + $('#buttonDsTargetValidate').click(validateTarget); + + + +}); + +function validateSource() +{ + $('#buttonDsSourceValidate').html(buttonDsSourceValidate + ' ' + hac.global.getSpinnerImg()); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#buttonDsSourceValidate').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'application/json', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + debug.log(data); + if(data.valid === true) { + $('#buttonDsSourceValidate').html("Valid!"); + } else { + $('#buttonDsSourceValidate').html("Not valid!!"); + } + }, + error: hac.global.err + }); +} + +function validateTarget() +{ + $('#buttonDsTargetValidate').html(buttonDsTargetValidate + ' ' + hac.global.getSpinnerImg()); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#buttonDsTargetValidate').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'application/json', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + debug.log(data); + if(data.valid === true) { + $('#buttonDsTargetValidate').html("Valid!"); + } else { + $('#buttonDsTargetValidate').html("Not valid!!"); + } + }, + error: hac.global.err + }); +} + +function loadSource() +{ + $('#tableDsSourceWrapper').fadeOut(); + tableDsSource.fnClearTable(); + + //$('#buttonCopyData').html(buttonCopyData + ' ' + hac.global.getSpinnerImg()); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#tableDsSource').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'application/json', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + debug.log(data); + tableDsSource.fnAddData(["profile",data.profile]); + tableDsSource.fnAddData(["driver",data.driver]); + tableDsSource.fnAddData(["connectionString",data.connectionString]); + tableDsSource.fnAddData(["userName",data.userName]); + tableDsSource.fnAddData(["password",data.password]); + tableDsSource.fnAddData(["schema",data.schema]); + tableDsSource.fnAddData(["catalog",data.catalog]); + + $("#tableDsSourceWrapper").fadeIn(); + }, + error: hac.global.err + }); +} + +function loadTarget() +{ + $('#tableDsTargetWrapper').fadeOut(); + tableDsTarget.fnClearTable(); + + //$('#buttonCopyData').html(buttonCopyData + ' ' + hac.global.getSpinnerImg()); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#tableDsTarget').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'application/json', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + debug.log(data); + tableDsTarget.fnAddData(["profile",data.profile]); + 
tableDsTarget.fnAddData(["driver",data.driver]); + tableDsTarget.fnAddData(["connectionString",data.connectionString]); + tableDsTarget.fnAddData(["userName",data.userName]); + tableDsTarget.fnAddData(["password",data.password]); + tableDsTarget.fnAddData(["schema",data.schema]); + tableDsTarget.fnAddData(["catalog",data.catalog]); + + $("#tableDsTargetWrapper").fadeIn(); + }, + error: hac.global.err + }); +} \ No newline at end of file diff --git a/commercedbsynchac/hac/resources/static/js/migrationMetrics.js b/commercedbsynchac/hac/resources/static/js/migrationMetrics.js new file mode 100644 index 0000000..f0f3055 --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/migrationMetrics.js @@ -0,0 +1,135 @@ + +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +let charts = {}; + +$(document).ready(function () { + getData(); + window.setInterval(getData, 5000) +}); + + +function getData() { + var token = $("meta[name='_csrf']").attr("content"); + var url = $('#charts').attr('data-chartDataUrl'); + $.ajax({ + url: url, + type: 'GET', + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + data.forEach(function(metric) { + if($("#" + getChartId(metric)).length == 0) { + createContainer(metric); + createChart(metric); + } + drawChart(metric); + }); + }, + error: function(xhr, status, error) { + console.error('Could not get metric data'); + } + }); + +} + +function createChart(metric) { + chartId = getChartId(metric); + chart = new Chart($('#' + chartId), { + type: 'doughnut', + data: { + datasets: [{ + data: [], + backgroundColor: [], + label: metric.name + }], + labels: [] + }, + options: { + legend: { + display: false, + }, + tooltips: { + callbacks: { + label: function (tooltipItem, data) { + var dataset = data.datasets[tooltipItem.datasetIndex]; + var meta = dataset._meta[Object.keys(dataset._meta)[0]]; + var total = meta.total; + var currentValue = dataset.data[tooltipItem.index]; + var percentage = parseFloat((currentValue / total * 100).toFixed(1)); + return currentValue + ' (' + percentage + '%)'; + }, + title: function (tooltipItem, data) { + return data.labels[tooltipItem[0].index]; + } + }, + backgroundColor: "rgb(255,255,255,0.95)", + titleFontColor: "rgb(0,0,0)", + bodyFontColor: "rgb(0,0,0)" + }, + options: { + responsive: true, + maintainAspectRatio: false, + } + } + }); + charts[chartId] = chart; +} + + +function createContainer(metric) { + size = '100px'; + root = $('#charts'); + wrapper = $('
<div/>').css('margin-bottom','10px'); + root.append(wrapper); + title = $('<div/>').text(metric.name); + canvasContainer = $('<div/>').attr('id', getChartId(metric)+'-container').css('text-align','center').css('width',size).css('height',size); + wrapper.append(title); + wrapper.append(canvasContainer); + canvas = $('<canvas/>').attr('id', getChartId(metric)).attr('width', size).attr('height', size); + canvasContainer.append(canvas); +} + +function drawChart(metric) { + //debug.log(metric.primaryValue + " / " + metric.secondaryValue); + chart = charts[getChartId(metric)]; + chart.data.datasets[0].data = [metric.secondaryValue, metric.primaryValue]; + primaryLabel = metric.primaryValueLabel + ' (' + metric.primaryValueUnit + ')'; + secondaryLabel = metric.secondaryValueLabel + ' (' + metric.secondaryValueUnit + ')'; + chart.data.labels = [secondaryLabel, primaryLabel]; + chart.options.tooltips.enabled = true; + + primaryColor = metric.primaryValueStandardColor; + secondaryColor = metric.secondaryValueStandardColor; + if(metric.primaryValue < 0) { + primaryColor = '#9a9fa6'; + secondaryColor = primaryColor; + chart.options.tooltips.enabled = false; + } else { + if(metric.primaryValueThreshold > 0) { + if(metric.primaryValue >= metric.primaryValueThreshold) { + primaryColor = metric.primaryValueCriticalColor; + } + } + if(metric.secondaryValueThreshold > 0) { + if(metric.secondaryValue >= metric.secondaryValueThreshold) { + secondaryColor = metric.secondaryValueCriticalColor; + } + } + } + + chart.data.datasets[0].backgroundColor = [secondaryColor, primaryColor]; + chart.update(); +} + +function getChartId(metric) { + return 'chart-area-' + metric.metricId; +} + + diff --git a/commercedbsynchac/hac/resources/static/js/migrationReports.js b/commercedbsynchac/hac/resources/static/js/migrationReports.js new file mode 100644 index 0000000..956160b --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/migrationReports.js @@ -0,0 +1,66 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0 + * + */ + +var reportsTable; + +$(document).ready(function () { + reportsTable = $('#reportsTable').dataTable({ + "bStateSave": true, + "bAutoWidth": false, + "aLengthMenu": [[10, 25, 50, 100, -1], [10, 25, 50, 100, 'all']] + }); + loadMigrationReports(); +}); + +function loadMigrationReports() { + $('#logsWrapper').fadeOut(); + reportsTable.fnClearTable(); + var token = $("meta[name='_csrf']").attr("content"); + var url = "/hac/commercedbsynchac/loadMigrationReports"; + $.ajax({ + url: url, + type: 'GET', + headers: { + 'Accept': 'application/json', + 'X-CSRF-TOKEN': token + }, + success: function (data) { + if (data.length > 0) { + data.forEach((report) => { + let strippedMigrationId = report.reportId; + reportsTable.fnAddData([ + strippedMigrationId, + report.modifiedTimestamp, + '' + ]) + }); + } + }, + error: hac.global.err + }); +} + +function downloadReport(migrationId) { + var token = $("meta[name='_csrf']").attr("content"); + var url = "/hac/commercedbsynchac/downloadLogsReport?migrationId="+migrationId; + $.ajax({ + url: url, + type: 'GET', + headers: { + 'X-CSRF-TOKEN': token + }, + success: function (data) { + debug.log(data); + var blob = new Blob([data], {type: "text/plain"}); + var link = document.createElement("a"); + link.href = window.URL.createObjectURL(blob); + link.download = migrationId; + link.click(); + }, + error: hac.global.err + }); +} + diff --git a/commercedbsynchac/hac/resources/static/js/schemaCopy.js b/commercedbsynchac/hac/resources/static/js/schemaCopy.js new file mode 100644 index 0000000..4cc22c3 --- /dev/null +++ b/commercedbsynchac/hac/resources/static/js/schemaCopy.js @@ -0,0 +1,161 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ + +var targetSchemaDiffTable; +var sourceSchemaDiffTable; +var buttonMigrateSchemaPreview = "Calculating Diff Preview"; +var buttonMigrateSchema = "Execute script"; +var buttonGenerateSchemaScript = "Generate Schema Script"; +var sqlQueryEditor; + +$(document).ready(function() { + $( "#tabs" ).tabs({ + activate: function(event, ui) { + } + }); + + + $("#sqlQueryWrapper").resizable().height("250px").width("100%"); + sqlQueryEditor = CodeMirror.fromTextArea(document.getElementById("sqlQuery"), { + mode: "text/x-sql", + lineNumbers: false, + lineWrapping: true, + autofocus: true + }); + + $('#buttonGenerateSchemaScript').click(generateSchemaScript); + $('#buttonMigrateSchemaPreview').click(migrateSchemaPreview); + $('#buttonMigrateSchema').prop('disabled', true); + + $('#checkboxAccept').change(function() { + if($(this).is(":checked")) { + $('#buttonMigrateSchema').prop('disabled', false); + } else { + $('#buttonMigrateSchema').prop('disabled', true); + } + }); + + // tab 1 + targetSchemaDiffTable = $('#targetSchemaDiffTable').dataTable({ + "bStateSave": true, + "bAutoWidth": false, + "aLengthMenu" : [[10,25,50,100,-1], [10,25,50,100,'all']] + }); + sourceSchemaDiffTable = $('#sourceSchemaDiffTable').dataTable({ + "bStateSave": true, + "bAutoWidth": false, + "aLengthMenu" : [[10,25,50,100,-1], [10,25,50,100,'all']] + }); + + // We do not want to submit form using standard http + $("#schemaSqlForm").submit(function() { + return false; + }); + + $('#schemaSqlForm').validate({ + submitHandler: migrateSchema + }); + +}); + +function migrateSchemaPreview() +{ + $('#schemaDiffWrapper').fadeOut(); + targetSchemaDiffTable.fnClearTable(); + sourceSchemaDiffTable.fnClearTable(); + + 
$('#buttonMigrateSchemaPreview').html(buttonMigrateSchemaPreview + ' ' + hac.global.getSpinnerImg()); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#buttonMigrateSchemaPreview').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'application/json', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + debug.log(data); + + $('#buttonMigrateSchemaPreview').html(buttonMigrateSchemaPreview); + + if(data.target.results.length > 0) { + targetSchemaDiffTable.fnAddData(data.target.results); + } + if(data.source.results.length > 0) { + sourceSchemaDiffTable.fnAddData(data.source.results); + } + + $("#schemaDiffWrapper").fadeIn(); + + }, + error: hac.global.err + }); +} + +function generateSchemaScript() +{ + $('#buttonGenerateSchemaScript').html(buttonGenerateSchemaScript + ' ' + hac.global.getSpinnerImg()); + $("#checkboxAccept").prop("checked", false); + $('#buttonMigrateSchema').prop('disabled', true); + + var token = $("meta[name='_csrf']").attr("content"); + var url = $('#buttonGenerateSchemaScript').attr('data-url'); + + $.ajax({ + url:url, + type:'GET', + headers:{ + 'Accept':'text/plain', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + hac.global.notify('Duplicate tables may have been found. Please review generated schema script carefully.'); + sqlQueryEditor.setValue(data); + $('#buttonGenerateSchemaScript').html(buttonGenerateSchemaScript); + }, + error: hac.global.err + }); +} + +function migrateSchema() +{ + if(sqlQueryEditor.getValue().length <= 1){ + return false; + } + $('#buttonMigrateSchema').html(buttonMigrateSchema + ' ' + hac.global.getSpinnerImg()); + $('#spinnerWrapper').show(); + var token = $("meta[name='_csrf']").attr("content"); + + var url = $('#buttonMigrateSchema').attr('data-url'); + + // Prepare data object + var dataObject = { + sqlQuery : sqlQueryEditor.getValue(), + accepted : $('#checkboxAccept').is(":checked") + }; + + $.ajax({ + url:url, + type:'POST', + data: dataObject, + headers:{ + 'Accept':'text/plain', + 'X-CSRF-TOKEN' : token + }, + success: function(data) { + $('#spinnerWrapper').hide(); + $('#buttonMigrateSchema').html(buttonMigrateSchema); + sqlQueryEditor.setValue(data); + }, + error: hac.global.err + }); + +} + + diff --git a/commercedbsynchac/hac/src/de/hybris/platform/hac/controller/CommercemigrationhacController.java b/commercedbsynchac/hac/src/de/hybris/platform/hac/controller/CommercemigrationhacController.java new file mode 100644 index 0000000..070d7d9 --- /dev/null +++ b/commercedbsynchac/hac/src/de/hybris/platform/hac/controller/CommercemigrationhacController.java @@ -0,0 +1,442 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+ * License: Apache-2.0 + * + */ +package de.hybris.platform.hac.controller; + +import com.google.common.base.Joiner; +import com.google.gson.Gson; +import com.google.gson.GsonBuilder; +import com.microsoft.azure.storage.blob.CloudBlockBlob; +import de.hybris.platform.commercedbsynchac.data.*; +import de.hybris.platform.servicelayer.config.ConfigurationService; +import de.hybris.platform.servicelayer.user.UserService; +import org.apache.commons.lang.BooleanUtils; +import org.apache.commons.lang.StringUtils; +import org.apache.commons.lang.exception.ExceptionUtils; +import org.apache.logging.log4j.util.Strings; +import com.sap.cx.boosters.commercedbsync.MigrationStatus; +import com.sap.cx.boosters.commercedbsync.constants.CommercedbsyncConstants; +import com.sap.cx.boosters.commercedbsync.context.MigrationContext; +import com.sap.cx.boosters.commercedbsync.repository.DataRepository; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseMigrationSynonymService; +import com.sap.cx.boosters.commercedbsync.service.DatabaseSchemaDifferenceService; +import com.sap.cx.boosters.commercedbsync.service.impl.BlobDatabaseMigrationReportStorageService; +import com.sap.cx.boosters.commercedbsync.service.impl.DefaultDatabaseSchemaDifferenceService; +import com.sap.cx.boosters.commercedbsync.utils.MaskUtil; +import com.sap.cx.boosters.commercedbsynchac.metric.MetricService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.http.HttpHeaders; +import org.springframework.http.HttpStatus; +import org.springframework.http.MediaType; +import org.springframework.http.ResponseEntity; +import org.springframework.stereotype.Controller; +import org.springframework.ui.Model; +import org.springframework.web.bind.annotation.*; + +import javax.servlet.http.HttpServletResponse; +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.nio.charset.StandardCharsets; +import java.text.SimpleDateFormat; +import java.time.Instant; +import java.time.LocalDateTime; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; +import java.util.*; +import java.util.stream.Collectors; + + +/** + * + */ +@Controller +@RequestMapping("/commercedbsynchac/**") +public class CommercemigrationhacController { + + public static final String DEFAULT_EMPTY_VAL = "[NOT SET]"; + private static final Logger LOG = LoggerFactory.getLogger(CommercemigrationhacController.class); + private static final SimpleDateFormat DATE_TIME_FORMATTER = new SimpleDateFormat("YYYY-MM-dd HH:mm", Locale.ENGLISH); + + @Autowired + private UserService userService; + + @Autowired + private DatabaseMigrationService databaseMigrationService; + + @Autowired + private DatabaseSchemaDifferenceService databaseSchemaDifferenceService; + + @Autowired + private ConfigurationService configurationService; + + @Autowired + private MigrationContext migrationContext; + + @Autowired + private DatabaseMigrationSynonymService databaseMigrationSynonymService; + + @Autowired + private MetricService metricService; + + @Autowired + BlobDatabaseMigrationReportStorageService blobDatabaseMigrationReportStorageService; + + + private String currentMigrationId; + + @RequestMapping(value = + {"/migrationSchema"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + public String schema(final Model model) { + logAction("Schema migration tab clicked"); + // ORACLE_TARGET 
-- start + migrationContext.refreshSelf(); + // ORACLE_TARGET -- END + model.addAttribute("wikiJdbcLogging", "some notes on database"); + model.addAttribute("wikiDatabase", "some more note on supported features"); + Map schemaSettings = new HashMap<>(); + schemaSettings.put(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_COLUMNS_ADD_ENABLED, migrationContext.isAddMissingColumnsToSchemaEnabled()); + schemaSettings.put(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_TABLES_REMOVE_ENABLED, migrationContext.isRemoveMissingTablesToSchemaEnabled()); + schemaSettings.put(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_TABLES_ADD_ENABLED, migrationContext.isAddMissingTablesToSchemaEnabled()); + schemaSettings.put(CommercedbsyncConstants.MIGRATION_SCHEMA_TARGET_COLUMNS_REMOVE_ENABLED, migrationContext.isRemoveMissingColumnsToSchemaEnabled()); + model.addAttribute("schemaSettings", schemaSettings); + model.addAttribute("schemaMigrationDisabled", !migrationContext.isSchemaMigrationEnabled()); + model.addAttribute("schemaSqlForm", new SchemaSqlFormData()); + return "schemaCopy"; + } + + @RequestMapping(value = + {"/migrationData"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + public String data(final Model model) { + logAction("Data migration tab clicked"); + // ORACLE_TARGET -- start + migrationContext.refreshSelf(); + model.addAttribute("isIncremental", migrationContext.isIncrementalModeEnabled()); + Instant timestamp = migrationContext.getIncrementalTimestamp(); + model.addAttribute("incrementalTimestamp", timestamp == null ? DEFAULT_EMPTY_VAL : timestamp); + model.addAttribute("srcTsName", StringUtils.defaultIfEmpty(migrationContext.getDataSourceRepository().getDataSourceConfiguration().getTypeSystemName(), DEFAULT_EMPTY_VAL)); + model.addAttribute("tgtTsName", StringUtils.defaultIfEmpty(migrationContext.getDataTargetRepository().getDataSourceConfiguration().getTypeSystemName(), DEFAULT_EMPTY_VAL)); + model.addAttribute("srcPrefix", StringUtils.defaultIfEmpty(migrationContext.getDataSourceRepository().getDataSourceConfiguration().getTablePrefix(), DEFAULT_EMPTY_VAL)); + model.addAttribute("tgtMigPrefix", StringUtils.defaultIfEmpty(migrationContext.getDataTargetRepository().getDataSourceConfiguration().getTablePrefix(), DEFAULT_EMPTY_VAL)); + model.addAttribute("tgtActualPrefix", StringUtils.defaultIfEmpty(configurationService.getConfiguration().getString("db.tableprefix"), DEFAULT_EMPTY_VAL)); + return "dataCopy"; + } + + @RequestMapping(value = {"/migrationDataSource"}, method = {org.springframework.web.bind.annotation.RequestMethod.GET}) + public String dataSource(final Model model) { + logAction("Data sources tab clicked"); + model.addAttribute("wikiJdbcLogging", "some notes on database"); + model.addAttribute("wikiDatabase", "some more note on supported features"); + return "dataSource"; + } + + @RequestMapping(value = {"/migrationDataSource/{profile}"}, method = {org.springframework.web.bind.annotation.RequestMethod.GET}) + @ResponseBody + public DataSourceConfigurationData dataSourceInfo(final Model model, @PathVariable String profile) { + model.addAttribute("wikiJdbcLogging", "some notes on database"); + model.addAttribute("wikiDatabase", "some more note on supported features"); + final DataRepository dataRepository = getDataRepository(profile); + DataSourceConfigurationData dataSourceConfigurationData = null; + + if (dataRepository != null) { + dataSourceConfigurationData = new DataSourceConfigurationData(); + 
dataSourceConfigurationData.setProfile(dataRepository.getDataSourceConfiguration().getProfile()); + dataSourceConfigurationData.setDriver(dataRepository.getDataSourceConfiguration().getDriver()); + dataSourceConfigurationData.setConnectionString(MaskUtil.stripJdbcPassword(dataRepository.getDataSourceConfiguration().getConnectionString())); + dataSourceConfigurationData.setUserName(dataRepository.getDataSourceConfiguration().getUserName()); + dataSourceConfigurationData.setPassword(dataRepository.getDataSourceConfiguration().getPassword().replaceAll(".*", "*")); + dataSourceConfigurationData.setCatalog(dataRepository.getDataSourceConfiguration().getCatalog()); + dataSourceConfigurationData.setSchema(dataRepository.getDataSourceConfiguration().getSchema()); + dataSourceConfigurationData.setMaxActive(dataRepository.getDataSourceConfiguration().getMaxActive()); + dataSourceConfigurationData.setMaxIdle(dataRepository.getDataSourceConfiguration().getMaxIdle()); + dataSourceConfigurationData.setMinIdle(dataRepository.getDataSourceConfiguration().getMinIdle()); + dataSourceConfigurationData.setRemoveAbandoned(dataRepository.getDataSourceConfiguration().isRemoveAbandoned()); + } + + return dataSourceConfigurationData; + } + + @RequestMapping(value = + {"/migrationDataSource/{profile}/validate"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + @ResponseBody + public DataSourceValidationResultData dataSourceValidation(final Model model, @PathVariable String profile) { + logAction("Validate connections button clicked"); + model.addAttribute("wikiJdbcLogging", "some notes on database"); + model.addAttribute("wikiDatabase", "some more note on supported features"); + + DataSourceValidationResultData dataSourceValidationResultData = new DataSourceValidationResultData(); + + try { + DataRepository dataRepository = getDataRepository(profile); + if (dataRepository != null) { + dataSourceValidationResultData.setValid(dataRepository.validateConnection()); + } else { + dataSourceValidationResultData.setValid(false); + } + } catch (Exception e) { + e.printStackTrace(); + dataSourceValidationResultData.setException(e.getMessage()); + } + + return dataSourceValidationResultData; + } + + private DataRepository getDataRepository(String profile) { + if (StringUtils.equalsIgnoreCase(profile, migrationContext.getDataSourceRepository().getDataSourceConfiguration().getProfile())) { + return migrationContext.getDataSourceRepository(); + } else if (StringUtils.equalsIgnoreCase(profile, migrationContext.getDataTargetRepository().getDataSourceConfiguration().getProfile())) { + return migrationContext.getDataTargetRepository(); + } else { + return null; + } + } + + @RequestMapping(value = + {"/generateSchemaScript"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + @ResponseBody + public String generateSchemaScript() throws Exception { + logAction("Generate schema script button clicked"); + // ORACLE_TARGET -- start + migrationContext.refreshSelf(); + // ORACLE_TARGET -- END + return databaseSchemaDifferenceService.generateSchemaDifferencesSql(migrationContext); + } + + @RequestMapping(value = + {"/migrateSchema"}, method = + {org.springframework.web.bind.annotation.RequestMethod.POST}) + @ResponseBody + public String migrateSchema(@ModelAttribute("schemaSqlForm") SchemaSqlFormData data) { + try { + logAction("Execute script button clicked"); + // ORACLE_TARGET -- start + migrationContext.refreshSelf(); + // ORACLE_TARGET -- END + if (BooleanUtils.isTrue(data.getAccepted())) 
{ + databaseSchemaDifferenceService.executeSchemaDifferencesSql(migrationContext, data.getSqlQuery()); + } else { + throw new IllegalStateException("Checkbox not accepted"); + } + } catch (Exception e) { + return ExceptionUtils.getStackTrace(e); + } + return "Successfully executed sql"; + } + + @RequestMapping(value = + {"/previewSchemaMigration"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + @ResponseBody + public SchemaDifferenceResultContainerData previewSchemaMigration() throws Exception { + logAction("Preview schema migration changes button clicked"); + LOG.info("Starting preview of source and target db diff..."); + DefaultDatabaseSchemaDifferenceService.SchemaDifferenceResult difference = databaseSchemaDifferenceService.getDifference(migrationContext); + SchemaDifferenceResultData sourceSchemaDifferenceResultData = getSchemaDifferenceResultData(difference.getSourceSchema()); + SchemaDifferenceResultData targetSchemaDifferenceResultData = getSchemaDifferenceResultData(difference.getTargetSchema()); + SchemaDifferenceResultContainerData schemaDifferenceResultContainerData = new SchemaDifferenceResultContainerData(); + schemaDifferenceResultContainerData.setSource(sourceSchemaDifferenceResultData); + schemaDifferenceResultContainerData.setTarget(targetSchemaDifferenceResultData); + + Gson gson = new GsonBuilder().setPrettyPrinting().create(); + String timeStamp = new SimpleDateFormat("yyyy-MM-dd-HH-mm-ss").format(new Date()); + try { + InputStream is = new ByteArrayInputStream(gson.toJson(schemaDifferenceResultContainerData).getBytes(StandardCharsets.UTF_8)); + blobDatabaseMigrationReportStorageService.store("schema-differences-"+timeStamp+".json", is); + } catch (Exception e){ + LOG.error("Failed to save the schema differences report to blob storage!"); + } + return schemaDifferenceResultContainerData; + } + + private SchemaDifferenceResultData getSchemaDifferenceResultData(DefaultDatabaseSchemaDifferenceService.SchemaDifference diff) { + SchemaDifferenceResultData schemaDifferenceResultData = new SchemaDifferenceResultData(); + + Map missingTablesMap = diff.getMissingTables().stream() + .collect(Collectors.toMap(e -> getTableName(diff, e.getRightName()), e -> "")); + Map missingColumnsMap = diff.getMissingColumnsInTable().asMap().entrySet().stream() + .collect(Collectors.toMap(e -> getTableName(diff, e.getKey().getRightName()), e -> Joiner.on(";").join(e.getValue()))); + + Map map = new HashMap<>(); + map.putAll(missingTablesMap); + map.putAll(missingColumnsMap); + + String[][] result = new String[map.size()][2]; + int count = 0; + for (Map.Entry entry : map.entrySet()) { + result[count][0] = entry.getKey(); + result[count][1] = entry.getValue(); + count++; + } + + schemaDifferenceResultData.setResults(result); + return schemaDifferenceResultData; + } + + private String getTableName(DefaultDatabaseSchemaDifferenceService.SchemaDifference diff, String name) { + if (StringUtils.isNotEmpty(diff.getPrefix())) { + return String.format("%s", name); + } else { + return name; + } + } + + @RequestMapping(value = "/copyData", method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE) + @ResponseBody + public MigrationStatus copyData() throws Exception { + logAction("Start data migration executed"); + // ORACLE_TARGET -- start + migrationContext.refreshSelf(); + // ORACLE_TARGET -- END + this.currentMigrationId = databaseMigrationService.startMigration(migrationContext); + return databaseMigrationService.getMigrationState(migrationContext, 
this.currentMigrationId); + } + + @RequestMapping(value = "/abortCopy", method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE) + @ResponseBody + public String abortCopy(@RequestBody String migrationID) throws Exception { + logAction("Stop data migration executed"); + // ORACLE_TARGET -- start + migrationContext.refreshSelf(); + // ORACLE_TARGET -- END + databaseMigrationService.stopMigration(migrationContext, migrationID); + return "true"; + } + + @RequestMapping(value = "/resumeRunning", method = RequestMethod.GET) + @ResponseBody + public MigrationStatus resumeRunning() throws Exception { + if (StringUtils.isNotEmpty(this.currentMigrationId)) { + MigrationStatus migrationState = databaseMigrationService.getMigrationState(migrationContext, this.currentMigrationId); + prepareStateForJsonSerialization(migrationState); + return migrationState; + } else { + return null; + } + } + + @RequestMapping(value = "/copyStatus", method = RequestMethod.GET) + @ResponseBody + public MigrationStatus copyStatus(@RequestParam String migrationID, @RequestParam long since) throws Exception { + OffsetDateTime sinceTime = OffsetDateTime.ofInstant(Instant.ofEpochMilli(since), ZoneOffset.UTC); + MigrationStatus migrationState = databaseMigrationService.getMigrationState(migrationContext, migrationID, sinceTime); + prepareStateForJsonSerialization(migrationState); + return migrationState; + } + + private void prepareStateForJsonSerialization(MigrationStatus migrationState) { + migrationState.setStartEpoch(convertToEpoch(migrationState.getStart())); + migrationState.setStart(null); + migrationState.setEndEpoch(convertToEpoch(migrationState.getEnd())); + migrationState.setEnd(null); + migrationState.setLastUpdateEpoch(convertToEpoch(migrationState.getLastUpdate())); + migrationState.setLastUpdate(null); + + migrationState.getStatusUpdates().forEach(u -> { + u.setLastUpdateEpoch(convertToEpoch(u.getLastUpdate())); + u.setLastUpdate(null); + }); + } + + private Long convertToEpoch(LocalDateTime time) { + if (time == null) { + return null; + } + return time.toInstant(ZoneOffset.UTC).toEpochMilli(); + } + + @GetMapping( + value = "/copyReport", + produces = MediaType.APPLICATION_OCTET_STREAM_VALUE + ) + public @ResponseBody + byte[] getCopyReport(@RequestParam String migrationId, HttpServletResponse response) throws Exception { + logAction("Download migration report button clicked"); + response.setHeader("Content-Disposition", "attachment; filename=migration-report.json"); + Gson gson = new GsonBuilder().setPrettyPrinting().create(); + String json = gson.toJson(databaseMigrationService.getMigrationReport(migrationContext, migrationId)); + return json.getBytes(StandardCharsets.UTF_8.name()); + } + + @RequestMapping(value = "/switchPrefix", method = RequestMethod.PUT) + @ResponseBody + public Boolean switchPrefix(@RequestParam String prefix) throws Exception { + databaseMigrationSynonymService.recreateSynonyms(migrationContext.getDataTargetRepository(), prefix); + return Boolean.TRUE; + } + + @RequestMapping(value = "/metrics", method = RequestMethod.GET) + @ResponseBody + public List getMetrics() throws Exception { + return metricService.getMetrics(migrationContext); + } + + private void logAction(String message) { + LOG.info("{}: {} - User:{} - Time:{}", "CMT Action", message, userService.getCurrentUser().getUid(),LocalDateTime.now()); + } + + @RequestMapping(value = + {"/loadMigrationReports"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + @ResponseBody + public List 
loadMigrationReports() { + try { + List blobs = blobDatabaseMigrationReportStorageService.listAllReports(); + List result = new ArrayList<>(); + blobs.forEach(blob -> { + ReportResultData reportResultData = new ReportResultData(); + reportResultData.setModifiedTimestamp(getSortableTimestamp(blob)); + reportResultData.setReportId(blob.getName()); + reportResultData.setPrimaryUri(blob.getUri().toString()); + result.add(reportResultData); + }); + return result; + } catch (Exception e) { + e.printStackTrace(); + } + return null; + } + + private String getSortableTimestamp(CloudBlockBlob blob) { + if(blob != null && blob.getProperties() != null) { + Date lastModified = blob.getProperties().getLastModified(); + if(lastModified != null) { + return DATE_TIME_FORMATTER.format(lastModified); + } + } + return Strings.EMPTY; + } + + @GetMapping( + value = "/downloadLogsReport", + produces = MediaType.APPLICATION_OCTET_STREAM_VALUE + ) + public @ResponseBody + ResponseEntity downloadLogsReport(@RequestParam String migrationId) throws Exception { + logAction("Download migration report button clicked"); + byte[] outputFile = blobDatabaseMigrationReportStorageService.getReport(migrationId); + HttpHeaders responseHeaders = new HttpHeaders(); + responseHeaders.set("charset", "utf-8"); + responseHeaders.setContentType(MediaType.valueOf("text/plain")); + responseHeaders.setContentLength(outputFile.length); + responseHeaders.set("Content-disposition", "attachment; filename=migration-report.json"); + return new ResponseEntity<>(outputFile, responseHeaders, HttpStatus.OK); + } + + + @RequestMapping(value = + {"/migrationReports"}, method = + {org.springframework.web.bind.annotation.RequestMethod.GET}) + public String reports(final Model model) { + logAction("Migration reports tab clicked"); + return "migrationReports"; + } + +} diff --git a/commercedbsynchac/hac/testclasses/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.class b/commercedbsynchac/hac/testclasses/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.class new file mode 100644 index 0000000..9d97625 Binary files /dev/null and b/commercedbsynchac/hac/testclasses/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.class differ diff --git a/commercedbsynchac/hac/testsrc/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.java b/commercedbsynchac/hac/testsrc/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.java new file mode 100644 index 0000000..b6a3e9e --- /dev/null +++ b/commercedbsynchac/hac/testsrc/de/hybris/platform/hac/controller/CommercemigrationhacControllerTest.java @@ -0,0 +1,55 @@ +/* + * [y] hybris Platform + * + * Copyright (c) 2018 SAP SE or an SAP affiliate company. All rights reserved. + * + * This software is the confidential and proprietary information of SAP + * ("Confidential Information"). You shall not disclose such Confidential + * Information and shall use it only in accordance with the terms of the + * license agreement you entered into with SAP. + */ +package de.hybris.platform.hac.controller; + +import de.hybris.bootstrap.annotations.IntegrationTest; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + + +/** + * Test for {@link CommercemigrationhacController}. + */ +@IntegrationTest +public class CommercemigrationhacControllerTest { + + /** + * Code under test. + */ + protected CommercemigrationhacController cut; + + /** + * Set up the code under test. 
+ */ + @Before + public void setup() { + cut = new CommercemigrationhacController(); + } + + /** + * Clean up the code under test. + */ + @After + public void teardown() { + cut = null; + } + + @Test + public void testSayHello() { + /* + final String helloText = cut.sayHello(); + + assertNotNull(helloText); + assertNotEquals(0, helloText.length()); + */ + } +} diff --git a/commercedbsynchac/project.properties b/commercedbsynchac/project.properties new file mode 100644 index 0000000..44b5307 --- /dev/null +++ b/commercedbsynchac/project.properties @@ -0,0 +1,9 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# +commercedbsynchac.key=value +# Specifies the location of the spring context file putted automatically to the global platform application context. +commercedbsynchac.application-context=commercedbsynchac-spring.xml +migration.from.hac.enabled=true \ No newline at end of file diff --git a/commercedbsynchac/resources/com/sap/cx/boosters/commercedbsynchac/dummy.txt b/commercedbsynchac/resources/com/sap/cx/boosters/commercedbsynchac/dummy.txt new file mode 100644 index 0000000..e69de29 diff --git a/commercedbsynchac/resources/commercedbsynchac-beans.xml b/commercedbsynchac/resources/commercedbsynchac-beans.xml new file mode 100644 index 0000000..edc74ad --- /dev/null +++ b/commercedbsynchac/resources/commercedbsynchac-beans.xml @@ -0,0 +1,70 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsynchac/resources/commercedbsynchac-items.xml b/commercedbsynchac/resources/commercedbsynchac-items.xml new file mode 100644 index 0000000..0549dc5 --- /dev/null +++ b/commercedbsynchac/resources/commercedbsynchac-items.xml @@ -0,0 +1,42 @@ + + + + + + + + + + diff --git a/commercedbsynchac/resources/commercedbsynchac-spring.xml b/commercedbsynchac/resources/commercedbsynchac-spring.xml new file mode 100644 index 0000000..2469c7e --- /dev/null +++ b/commercedbsynchac/resources/commercedbsynchac-spring.xml @@ -0,0 +1,48 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/commercedbsynchac/resources/commercedbsynchac-tab-config.json b/commercedbsynchac/resources/commercedbsynchac-tab-config.json new file mode 100644 index 0000000..9903dcd --- /dev/null +++ b/commercedbsynchac/resources/commercedbsynchac-tab-config.json @@ -0,0 +1,28 @@ +[ + { + "basePath": "/commercedbsynchac", + "mainTabLabel": "Migration", + "subTabs": [ + { + "path": "/migrationDataSource", + "label": "Data Sources", + "skipPrefix": false + }, + { + "path": "/migrationSchema", + "label": "Schema Migration", + "skipPrefix": false + }, + { + "path": "/migrationData", + "label": "Data Migration", + "skipPrefix": false + }, + { + "path": "/migrationReports", + "label": "Reports", + "skipPrefix": false + } + ] + } +] diff --git a/commercedbsynchac/resources/commercedbsynchac-without-migration-tab-config.json b/commercedbsynchac/resources/commercedbsynchac-without-migration-tab-config.json new file mode 100644 index 0000000..0d4f101 --- /dev/null +++ b/commercedbsynchac/resources/commercedbsynchac-without-migration-tab-config.json @@ -0,0 +1,2 @@ +[ +] diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_de.properties b/commercedbsynchac/resources/localization/i2ihac-locales_de.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ 
b/commercedbsynchac/resources/localization/i2ihac-locales_de.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_en.properties b/commercedbsynchac/resources/localization/i2ihac-locales_en.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_en.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_es.properties b/commercedbsynchac/resources/localization/i2ihac-locales_es.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_es.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_fr.properties b/commercedbsynchac/resources/localization/i2ihac-locales_fr.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_fr.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_it.properties b/commercedbsynchac/resources/localization/i2ihac-locales_it.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_it.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_ja.properties b/commercedbsynchac/resources/localization/i2ihac-locales_ja.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_ja.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_ko.properties b/commercedbsynchac/resources/localization/i2ihac-locales_ko.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_ko.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_pt.properties b/commercedbsynchac/resources/localization/i2ihac-locales_pt.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_pt.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. 
+# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_ru.properties b/commercedbsynchac/resources/localization/i2ihac-locales_ru.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_ru.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/resources/localization/i2ihac-locales_zh.properties b/commercedbsynchac/resources/localization/i2ihac-locales_zh.properties new file mode 100644 index 0000000..e214d48 --- /dev/null +++ b/commercedbsynchac/resources/localization/i2ihac-locales_zh.properties @@ -0,0 +1,6 @@ +# +# Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. +# License: Apache-2.0 +# +# + diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/CommercedbsynchacStandalone.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/CommercedbsynchacStandalone.java new file mode 100644 index 0000000..65f6381 --- /dev/null +++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/CommercedbsynchacStandalone.java @@ -0,0 +1,44 @@ +/* + * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-synccontributors. + * License: Apache-2.0 + * + */ +package com.sap.cx.boosters.commercedbsynchac; + +import de.hybris.platform.core.Registry; +import de.hybris.platform.jalo.JaloSession; +import de.hybris.platform.util.RedeployUtilities; +import de.hybris.platform.util.Utilities; + + +/** + * Demonstration of how to write a standalone application that can be run directly from within eclipse or from the + * commandline.
+ * To run this from the command line, use the following command:
+ *
+ * java -jar bootstrap/bin/ybootstrap.jar "new commercedbsynchac.CommercedbsynchacStandalone().run();"
+ *
+ * From Eclipse, just run as Java Application. Note that you may need to add all other projects like
+ * ext-commerce, ext-pim to the launch configuration classpath.
+ */
+public class CommercedbsynchacStandalone {
+    /**
+     * Main class to be able to run it directly as a java program.
+     *
+     * @param args the arguments from the command line
+     */
+    public static void main(final String[] args) {
+        new CommercedbsynchacStandalone().run();
+    }
+
+    public void run() {
+        Registry.activateStandaloneMode();
+        Registry.activateMasterTenant();
+
+        final JaloSession jaloSession = JaloSession.getCurrentSession();
+        System.out.println("Session ID: " + jaloSession.getSessionID()); //NOPMD
+        System.out.println("User: " + jaloSession.getUser()); //NOPMD
+        Utilities.printAppInfo();
+
+        RedeployUtilities.shutdown();
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/constants/YhacextConstants.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/constants/YhacextConstants.java
new file mode 100644
index 0000000..da69f92
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/constants/YhacextConstants.java
@@ -0,0 +1,21 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+package com.sap.cx.boosters.commercedbsynchac.constants;
+
+import com.sap.cx.boosters.commercedbsynchac.constants.GeneratedYhacextConstants;
+
+/**
+ * Global class for all Commercedbsynchac constants. You can add global constants for your extension into this class.
+ */
+public final class YhacextConstants extends GeneratedYhacextConstants {
+    public static final String EXTENSIONNAME = "commercedbsynchac";
+
+    private YhacextConstants() {
+        // empty to avoid instantiating this constant class
+    }
+
+    // implement here constants used by this extension
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/MetricService.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/MetricService.java
new file mode 100644
index 0000000..a0131e8
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/MetricService.java
@@ -0,0 +1,16 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric;
+
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import java.util.List;
+
+public interface MetricService {
+    List<MetricData> getMetrics(MigrationContext context);
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/impl/DefaultMetricService.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/impl/DefaultMetricService.java
new file mode 100644
index 0000000..b63be52
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/impl/DefaultMetricService.java
@@ -0,0 +1,42 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.impl;
+
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsynchac.metric.MetricService;
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class DefaultMetricService implements MetricService {
+
+    private static final Logger LOG = LoggerFactory.getLogger(DefaultMetricService.class);
+
+    private final List<MetricPopulator> populators;
+
+    public DefaultMetricService(List<MetricPopulator> populators) {
+        this.populators = populators;
+    }
+
+    @Override
+    public List<MetricData> getMetrics(MigrationContext context) {
+        List<MetricData> dataList = new ArrayList<>();
+        for (MetricPopulator populator : populators) {
+            try {
+                dataList.add(populator.populate(context));
+            } catch (Exception e) {
+                // a failing populator must not break the collection of the remaining metrics
+                LOG.error("Error while populating metric. Populator: " + populator.getClass().getName(), e);
+            }
+        }
+        return dataList;
+    }
+
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/MetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/MetricPopulator.java
new file mode 100644
index 0000000..3e328d4
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/MetricPopulator.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator;
+
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+public interface MetricPopulator {
+    // interface fields are implicitly public static final
+    String PRIMARY_STANDARD_COLOR = "#92cae4";
+    String PRIMARY_CRITICAL_COLOR = "#de5d70";
+    String SECONDARY_STANDARD_COLOR = "#d5edf8";
+    String SECONDARY_CRITICAL_COLOR = "#e8acb5";
+
+    MetricData populate(MigrationContext context) throws Exception;
+
+    default void populateColors(MetricData data) {
+        data.setPrimaryValueStandardColor(PRIMARY_STANDARD_COLOR);
+        data.setPrimaryValueCriticalColor(PRIMARY_CRITICAL_COLOR);
+        data.setSecondaryValueStandardColor(SECONDARY_STANDARD_COLOR);
+        data.setSecondaryValueCriticalColor(SECONDARY_CRITICAL_COLOR);
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/CpuMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/CpuMetricPopulator.java
new file mode 100644
index 0000000..34f0867
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/CpuMetricPopulator.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import org.springframework.beans.factory.annotation.Value;
+
+import java.lang.management.OperatingSystemMXBean;
+
+public class CpuMetricPopulator implements MetricPopulator {
+
+    @Value("#{T(java.lang.management.ManagementFactory).getOperatingSystemMXBean()}")
+    private OperatingSystemMXBean operatingSystemMXBean;
+
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        MetricData data = new MetricData();
+        double systemLoadAverage = operatingSystemMXBean.getSystemLoadAverage();
+        int availableProcessors = operatingSystemMXBean.getAvailableProcessors();
+        // normalize the load average to a percentage of the available processors, capped at 100
+        int loadAverage = (int) (systemLoadAverage * 100 / availableProcessors);
+        if (loadAverage > 100) {
+            loadAverage = 100;
+        }
+        data.setMetricId("cpu");
+        data.setName("CPU");
+        data.setDescription("The system load in percent");
+        data.setPrimaryValue((double) loadAverage);
+        data.setPrimaryValueLabel("Load");
+        data.setPrimaryValueUnit("%");
+        data.setPrimaryValueThreshold(90d);
+        data.setSecondaryValue((double) 100 - loadAverage);
+        data.setSecondaryValueLabel("Idle");
+        data.setSecondaryValueUnit("%");
+        data.setSecondaryValueThreshold(0d);
+        populateColors(data);
+        return data;
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/DTUMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/DTUMetricPopulator.java
new file mode 100644
index 0000000..c1cc45d
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/DTUMetricPopulator.java
@@ -0,0 +1,42 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+public class DTUMetricPopulator implements MetricPopulator {
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        MetricData data = new MetricData();
+        int primaryValue = (int) context.getDataTargetRepository().getDatabaseUtilization();
+        if (primaryValue > 100) {
+            primaryValue = 100;
+        }
+        int secondaryValue = 100 - primaryValue;
+        // a negative utilization means the value could not be determined
+        if (primaryValue < 0) {
+            primaryValue = -1;
+            secondaryValue = -1;
+        }
+
+        data.setMetricId("dtu");
+        data.setName("DTU");
+        data.setDescription("The current DTU utilization of the Azure database");
+        data.setPrimaryValue((double) primaryValue);
+        data.setPrimaryValueLabel("Used");
+        data.setPrimaryValueUnit("%");
+        data.setPrimaryValueThreshold(90d);
+        data.setSecondaryValue((double) secondaryValue);
+        data.setSecondaryValueLabel("Idle");
+        data.setSecondaryValueUnit("%");
+        data.setSecondaryValueThreshold(0d);
+        populateColors(data);
+        return data;
+    }
+
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariConnectionMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariConnectionMetricPopulator.java
new file mode 100644
index 0000000..29b01b6
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariConnectionMetricPopulator.java
@@ -0,0 +1,47 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import com.zaxxer.hikari.HikariDataSource;
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import javax.sql.DataSource;
+
+public abstract class HikariConnectionMetricPopulator implements MetricPopulator {
+
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        if (!(getDataSource(context) instanceof HikariDataSource)) {
+            throw new RuntimeException("Populator cannot be used for non-Hikari datasources");
+        }
+        MetricData data = new MetricData();
+        HikariDataSource hikariDS = (HikariDataSource) getDataSource(context);
+        double activeConnections = hikariDS.getHikariPoolMXBean().getActiveConnections();
+        double maxConnections = hikariDS.getHikariConfigMXBean().getMaximumPoolSize();
+        data.setMetricId(getMetricId(context));
+        data.setName(getName(context));
+        data.setDescription("The proportion of active and idle Hikari connections");
+        data.setPrimaryValue(activeConnections);
+        data.setPrimaryValueLabel("Active");
+        data.setPrimaryValueUnit("#");
+        data.setPrimaryValueThreshold(maxConnections);
+        data.setSecondaryValue(maxConnections - activeConnections);
+        data.setSecondaryValueLabel("Idle");
+        data.setSecondaryValueUnit("#");
+        data.setSecondaryValueThreshold(0d);
+        populateColors(data);
+        return data;
+    }
+
+    protected abstract String getMetricId(MigrationContext context);
+
+    protected abstract String getName(MigrationContext context);
+
+    protected abstract DataSource getDataSource(MigrationContext context);
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariSourceConnectionMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariSourceConnectionMetricPopulator.java
new file mode 100644
index 0000000..b7b105f
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariSourceConnectionMetricPopulator.java
@@ -0,0 +1,30 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import javax.sql.DataSource;
+
+public class HikariSourceConnectionMetricPopulator extends HikariConnectionMetricPopulator {
+
+    @Override
+    protected String getMetricId(MigrationContext context) {
+        return "hikari-source-pool";
+    }
+
+    @Override
+    protected String getName(MigrationContext context) {
+        return "Source DB Pool";
+    }
+
+    @Override
+    protected DataSource getDataSource(MigrationContext context) {
+        return context.getDataSourceRepository().getDataSource();
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariTargetConnectionMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariTargetConnectionMetricPopulator.java
new file mode 100644
index 0000000..f83041c
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/HikariTargetConnectionMetricPopulator.java
@@ -0,0 +1,30 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+import javax.sql.DataSource;
+
+public class HikariTargetConnectionMetricPopulator extends HikariConnectionMetricPopulator {
+
+    @Override
+    protected String getMetricId(MigrationContext context) {
+        return "hikari-target-pool";
+    }
+
+    @Override
+    protected String getName(MigrationContext context) {
+        return "Target DB Pool";
+    }
+
+    @Override
+    protected DataSource getDataSource(MigrationContext context) {
+        return context.getDataTargetRepository().getDataSource();
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/IOMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/IOMetricPopulator.java
new file mode 100644
index 0000000..580dd00
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/IOMetricPopulator.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceCategory;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceProfiler;
+import com.sap.cx.boosters.commercedbsync.performance.PerformanceUnit;
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+
+public class IOMetricPopulator implements MetricPopulator {
+
+    private PerformanceProfiler performanceProfiler;
+
+    public IOMetricPopulator(PerformanceProfiler performanceProfiler) {
+        this.performanceProfiler = performanceProfiler;
+    }
+
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        MetricData data = new MetricData();
+        int avgRowReading = (int) performanceProfiler.getAverageByCategoryAndUnit(PerformanceCategory.DB_READ, PerformanceUnit.ROWS);
+        int avgRowWriting = (int) performanceProfiler.getAverageByCategoryAndUnit(PerformanceCategory.DB_WRITE, PerformanceUnit.ROWS);
+        int totalIO = avgRowReading + avgRowWriting;
+        // less than one row/s in both directions: report -1 instead of meaningless values
+        if (avgRowReading < 1 && avgRowWriting < 1) {
+            avgRowReading = -1;
+            avgRowWriting = -1;
+        }
+        data.setMetricId("db-io");
+        data.setName("Database I/O");
+        data.setDescription("The proportion of items read from source compared to items written to target");
+        data.setPrimaryValue((double) avgRowReading);
+        data.setPrimaryValueLabel("Read");
+        data.setPrimaryValueUnit("rows/s");
+        data.setPrimaryValueThreshold(totalIO * 0.75);
+        data.setSecondaryValue((double) avgRowWriting);
+        data.setSecondaryValueLabel("Write");
+        data.setSecondaryValueUnit("rows/s");
+        data.setSecondaryValueThreshold(totalIO * 0.75);
+        populateColors(data);
+        return data;
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/MemoryMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/MemoryMetricPopulator.java
new file mode 100644
index 0000000..8a75b38
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/MemoryMetricPopulator.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+
+public class MemoryMetricPopulator implements MetricPopulator {
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        MetricData data = new MetricData();
+        Runtime runtime = Runtime.getRuntime();
+        // convert bytes to MB; divide by a double to avoid losing sub-MB precision
+        double freeMemory = runtime.freeMemory() / 1048576d;
+        double totalMemory = runtime.totalMemory() / 1048576d;
+        double usedMemory = totalMemory - freeMemory;
+        data.setMetricId("memory");
+        data.setName("Memory");
+        data.setDescription("The proportion of free and used memory");
+        data.setPrimaryValue(usedMemory);
+        data.setPrimaryValueLabel("Used");
+        data.setPrimaryValueUnit("MB");
+        data.setPrimaryValueThreshold(totalMemory * 0.9);
+        data.setSecondaryValue(freeMemory);
+        data.setSecondaryValueLabel("Free");
+        data.setSecondaryValueUnit("MB");
+        data.setSecondaryValueThreshold(0d);
+        populateColors(data);
+        return data;
+    }
+}
diff --git a/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/TaskExecutorMetricPopulator.java b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/TaskExecutorMetricPopulator.java
new file mode 100644
index 0000000..7bbad47
--- /dev/null
+++ b/commercedbsynchac/src/com/sap/cx/boosters/commercedbsynchac/metric/populator/impl/TaskExecutorMetricPopulator.java
@@ -0,0 +1,49 @@
+/*
+ * Copyright: 2022 SAP SE or an SAP affiliate company and commerce-db-sync contributors.
+ * License: Apache-2.0
+ *
+ */
+
+package com.sap.cx.boosters.commercedbsynchac.metric.populator.impl;
+
+import com.sap.cx.boosters.commercedbsynchac.metric.populator.MetricPopulator;
+import de.hybris.platform.commercedbsynchac.data.MetricData;
+import org.apache.commons.lang.StringUtils;
+import com.sap.cx.boosters.commercedbsync.context.MigrationContext;
+import org.springframework.core.task.AsyncTaskExecutor;
+import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
+
+public class TaskExecutorMetricPopulator implements MetricPopulator {
+
+    private AsyncTaskExecutor executor;
+    private String name;
+
+    public TaskExecutorMetricPopulator(AsyncTaskExecutor executor, String name) {
+        this.executor = executor;
+        this.name = name;
+    }
+
+    @Override
+    public MetricData populate(MigrationContext context) throws Exception {
+        if (!(executor instanceof ThreadPoolTaskExecutor)) {
+            throw new RuntimeException("Populator can only be used for " + ThreadPoolTaskExecutor.class.getName());
+        }
+        ThreadPoolTaskExecutor taskExecutor = (ThreadPoolTaskExecutor) executor;
+        MetricData data = new MetricData();
+        int activeCount = taskExecutor.getActiveCount();
+        int maxPoolSize = taskExecutor.getMaxPoolSize();
+        data.setMetricId(name + "-executor");
+        data.setName(StringUtils.capitalize(name) + " Tasks");
+        data.setDescription("The tasks running in parallel in the task executor");
+        data.setPrimaryValue((double) activeCount);
+        data.setPrimaryValueLabel("Running");
+        data.setPrimaryValueUnit("#");
+        data.setPrimaryValueThreshold(-1d);
+        data.setSecondaryValue((double) maxPoolSize - activeCount);
+        data.setSecondaryValueLabel("Free");
+        data.setSecondaryValueUnit("#");
+        data.setSecondaryValueThreshold(-1d);
+        populateColors(data);
+        return data;
+    }
+}
diff --git a/docs/commercedbsync/after_save_listener_1.png b/docs/commercedbsync/after_save_listener_1.png
new file mode 100644
index 0000000..f821af6
Binary files /dev/null and b/docs/commercedbsync/after_save_listener_1.png differ
diff --git a/docs/commercedbsync/after_save_listener_2.png b/docs/commercedbsync/after_save_listener_2.png
new file mode 100644
index 0000000..e4e9744
Binary files /dev/null and b/docs/commercedbsync/after_save_listener_2.png differ
diff --git a/docs/configuration/CONFIGURATION-GUIDE.md b/docs/configuration/CONFIGURATION-GUIDE.md
new file mode 100644
index 0000000..986c9d2
--- /dev/null
+++ b/docs/configuration/CONFIGURATION-GUIDE.md
@@ -0,0 +1,42 @@
+# SAP Commerce DB Sync - Configuration Guide
+
+## Configuration reference
+
+See the [Configuration Reference](CONFIGURATION-REFERENCE.md) for an overview of the configurable properties.
+
+## Configure incremental data migration
+
+For large tables, it often makes sense to copy the bulk of the data before the cutover, and then only copy the rows that have changed in a given time frame. This helps to reduce the cutover window for production systems.
+To configure the incremental copy, set the following properties:
+```
+migration.data.incremental.enabled=
+migration.data.incremental.tables=
+migration.data.incremental.timestamp=
+migration.data.truncate.enabled=
+```
+Example:
+```
+migration.data.incremental.enabled=true
+migration.data.incremental.tables=orders,orderentries
+migration.data.incremental.timestamp=2020-07-28T18:44:00+01:00[Europe/Zurich]
+migration.data.truncate.enabled=false
+```
+
+> **LIMITATION**: Tables must have the following columns: modifiedTS, PK. Furthermore, this is an incremental approach: only modified and inserted rows are taken into account. Deletions on the source side are not handled.
+
+The timestamp refers to whatever timezone the source database is using (make sure to include the timezone).
+
+During the migration, the data copy process uses an UPSERT command to make sure new records are inserted and modified records are updated. Also make sure to disable truncation, as it is not desired for an incremental copy.
+
+Only tables configured for incremental will be taken into consideration, as long as they are not already excluded by the general filter properties. All other tables will be ignored.
+
+After the incremental migration you may have to migrate the numberseries table again, to ensure the PK generation is aligned.
+For this, disable incremental mode and use the property migration.data.tables.included to only migrate that one table.
+
+## Configure logging
+
+Use the following property to configure the log level:
+
+log4j2.logger.migrationToolkit.level
+
+Default value is INFO.
diff --git a/docs/configuration/CONFIGURATION-REFERENCE.md b/docs/configuration/CONFIGURATION-REFERENCE.md
new file mode 100644
index 0000000..fdd6e6c
--- /dev/null
+++ b/docs/configuration/CONFIGURATION-REFERENCE.md
@@ -0,0 +1,46 @@
+# SAP Commerce DB Sync - Configuration Reference
+
+
+| Property | Mandatory | Default | Description |
+|----------|-----------|---------|-------------|
+| migration.ds.source.db.driver | yes | | DB driver class for source connection |
+| migration.ds.source.db.url | yes | | DB url for source connection |
+| migration.ds.source.db.username | yes | | DB username for source connection |
+| migration.ds.source.db.password | yes | | DB password for source connection |
+| migration.ds.source.db.tableprefix | no | | DB table prefix for source connection |
+| migration.ds.source.db.schema | yes | | DB schema for source connection |
+| migration.ds.source.db.connection.pool.size.idle.min | no | ${db.pool.minIdle} | Min idle connections in source db pool |
+| migration.ds.source.db.connection.pool.size.active.max | no | ${db.pool.maxActive} | Max active connections in source db pool |
+| migration.ds.target.db.driver | no | ${db.driver} | DB driver class for target connection |
+| migration.ds.target.db.url | no | ${db.url} | DB url for target connection |
+| migration.ds.target.db.username | no | ${db.username} | DB username for target connection |
+| migration.ds.target.db.password | no | ${db.password} | DB password for target connection |
+| migration.ds.target.db.tableprefix | no | ${db.tableprefix} | DB table prefix for target connection |
+| migration.ds.target.db.schema | no | dbo | DB schema for target connection |
+| migration.ds.target.db.connection.pool.size.idle.min | no | ${db.pool.minIdle} | Min idle connections in target db pool |
+| migration.ds.target.db.connection.pool.size.active.max | no | ${db.pool.maxActive} | Max active connections in target db pool |
+| migration.ds.target.db.max.stage.migrations | no | 5 | The maximum number of staged table sets allowed |
+| migration.schema.enabled | no | true | Enable schema adaptation features |
+| migration.schema.target.tables.add.enabled | no | false | Allow adding missing tables to target schema |
+| migration.schema.target.columns.add.enabled | no | true | Allow adding missing columns to target table schema |
+| migration.schema.target.columns.remove.enabled | no | true | Allow removing extra columns from target table schema |
+| migration.data.reader.batchsize | no | 1000 | Batch size when reading data from source table |
+| migration.data.workers.writer.maxtasks | no | 10 | Maximum number of writer workers per table that can be executed in parallel |
+| migration.data.workers.reader.maxtasks | no | 3 | Maximum number of reader workers per table that can be executed in parallel |
+| migration.data.workers.retryattempts | no | 0 | Retry attempts if a batch (read or write) failed |
+| migration.data.truncate.enabled | no | true | Allow truncating the target table before writing data |
+| migration.data.truncate.excluded | no | | If truncating is enabled, exclude these tables. Comma-separated list |
+| migration.data.maxparalleltablecopy | no | 2 | Number of tables copied in parallel |
+| migration.data.columns.excluded.{table} | no | | Columns to be ignored when writing data to the target table. The {table} value has to be replaced with the table name; the property value is a comma-separated list of column names |
+| migration.data.columns.nullify.{table} | no | | Column values to be nullified when writing data to the target table. The {table} value has to be replaced with the table name; the property value is a comma-separated list of column names |
+| migration.data.indices.disable.enabled | no | false | Disable indices temporarily before writing data to the target table and re-enable them after the writing operation |
+| migration.data.indices.drop.enabled | no | false | Drop indices before writing data to the target table |
+| migration.data.tables.excluded | no | SYSTEMINIT | Tables to be excluded from migration. If migration.data.tables.included is set, this property is ignored |
+| migration.data.tables.included | no | | Tables to be included in migration. If migration.data.tables.excluded is set, this property is ignored |
+| migration.data.report.connectionstring | yes | ${media.globalSettings.cloudAzureBlobStorageStrategy.connection} | Target blob storage for the report generation |
+| migration.data.incremental.enabled | no | false | Enables the incremental mode |
+| migration.data.incremental.tables | no | | Only these tables will be taken into account for incremental migration |
+| migration.data.incremental.timestamp | no | | The timestamp in ISO-8601 format (include the timezone). Only records created or modified after this timestamp will be copied |
+| migration.data.pipe.timeout | no | 7200 | The max time the pipe can be blocked while running full before it times out |
+| migration.data.pipe.capacity | no | 100 | The maximum number of elements the pipe can hold before it starts blocking |
+| migration.stalled.timeout | no | 7200 | The time after which the pipe (and hence the migration) will be marked as stalled |
diff --git a/docs/developer/DEVELOPER-GUIDE.md b/docs/developer/DEVELOPER-GUIDE.md
new file mode 100644
index 0000000..449a55c
--- /dev/null
+++ b/docs/developer/DEVELOPER-GUIDE.md
@@ -0,0 +1,104 @@
+# SAP Commerce DB Sync - Developer Guide
+
+## Quick Start
+
+To install SAP Commerce DB Sync, follow these steps:
+
+Add the following extensions to your localextensions.xml:
+```
+<extension name="commercedbsync"/>
+<extension name="commercedbsynchac"/>
+```
+
+Make sure you add the source db driver to commercemigration/lib if necessary.
+
+Use the following sample configuration and add it to your local.properties file:
+
+```
+migration.ds.source.db.driver=com.mysql.jdbc.Driver
+migration.ds.source.db.url=jdbc:mysql://localhost:3600/localdev?useConfigs=maxPerformance&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&nullCatalogMeansCurrent=true
+migration.ds.source.db.username=[user]
+migration.ds.source.db.password=[password]
+migration.ds.source.db.tableprefix=
+migration.ds.source.db.schema=localdev
+
+migration.ds.target.db.driver=${db.driver}
+migration.ds.target.db.url=${db.url}
+migration.ds.target.db.username=${db.username}
+migration.ds.target.db.password=${db.password}
+migration.ds.target.db.tableprefix=${db.tableprefix}
+migration.ds.target.db.catalog=${db.catalog}
+migration.ds.target.db.schema=dbo
+
+```
+
+## Running Integration Tests
+
+Make sure the junit tenant is installed:
+- set 'installed.tenants=junit' in local.properties
+- run 'ant yunitinit' from the platform home
+
+Go to the commercemigrationtest extension, like so:
+
+```
+>cd commercemigrationtest
+>ant all integrationtests
+```
+
+Alternatively go to the platform home and trigger it from there:
+
+```
+platformhome>ant all integrationtests -Dtestclasses.packages=org.sap.move.commercemigrationtest.integration.*
+```
+
+The integration tests are parameterized with predefined combinations of source and target databases.
+Running the integration tests will bootstrap several database containers using docker and run tests annotated with '@Test', once for each parameter combination.
+
+> **PREREQUISITE**: Make sure docker is installed on your local machine and allocate sufficient memory (~6 GB). Also ensure you provide all necessary jdbc drivers for the test execution.
+
+## Connect to existing DB servers for integration tests
+
+If the env var `CI` is set, the integration tests will not start a Docker container for every DB, but
+connect to existing servers instead.
+
+You can use the `*_HOST`, `*_USR` and `*_PSW` env vars to configure server and user credentials.\
+**User / password must belong to an admin user (one that is allowed to create schemas/DBs, users, etc.)!**
+
+(Check out [direnv](https://direnv.net/) to automate setting up those environment variables for
+local development; a sample `.envrc` is sketched in the appendix at the end of this guide.)
+
+
+```sh
+export CI=true
+# do not drop schemas after each test class -> faster CI runs
+# only enable this property if you kill your DB containers regularly
+# export CI_SKIP_DROP=true
+export MSSQL_HOST=localhost:1433
+export MSSQL_USR=sa
+export MSSQL_PSW=localSAPassw0rd
+
+export MYSQL_HOST=localhost:3306
+export MYSQL_USR=root
+export MYSQL_PSW=root
+
+export ORACLE_HOST=localhost:1521
+export ORACLE_USR=system
+export ORACLE_PSW=oracle
+
+export HANA_HOST=localhost:39017
+export HANA_USR=SYSTEM
+export HANA_PSW=HXEHana1
+```
+
+
+## Contributing to the Commerce Migration Toolkit
+
+To contribute to the Commerce Migration Toolkit, follow these steps:
+
+1. Fork this repository;
+2. Create a branch: `git checkout -b <branch_name>`;
+3. Make your changes and commit them: `git commit -m '<commit_message>'`;
+4. Push to the original branch: `git push origin <project_name>/<location>`;
+5. Create the pull request.
+
+Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
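+
+## Appendix: direnv example
+
+As mentioned above, [direnv](https://direnv.net/) can automate the environment setup for the integration tests. A minimal `.envrc` sketch (hypothetical values; extend it with the remaining `*_HOST`/`*_USR`/`*_PSW` variables as needed):
+
+```sh
+export CI=true
+export MSSQL_HOST=localhost:1433
+export MSSQL_USR=sa
+export MSSQL_PSW=localSAPassw0rd
+```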
diff --git a/docs/performance/PERFORMANCE-GUIDE.md b/docs/performance/PERFORMANCE-GUIDE.md
new file mode 100644
index 0000000..a291156
--- /dev/null
+++ b/docs/performance/PERFORMANCE-GUIDE.md
@@ -0,0 +1,136 @@
+# SAP Commerce DB Sync - Performance Guide
+
+
+## Benchmarks
+
+### AWS to SAP Commerce Cloud
+
+Source Database:
+
+* AWS MySQL: db.m6g.large
+* Tables: 974
+* Row Count: 158'855'795
+* Total Volume at source (incl. Indexes): 51 GB
+
+Results:
+
+| Tier | Mem | CPU | Duration | parTables | rWorkers | wWorkers | batchSize | disIdx | DB size at target |
+|------|-----|-----|----------|-----------|----------|----------|-----------|--------|-------------------|
+| S12 | 4GB | 2 | 2h11m | 2 | 5 | 15 | 2000 | TRUE | 72GB |
+| S12 | 4GB | 2 | 3h4m | 2 | 5 | 15 | 2000 | FALSE | 92GB |
+| S12 | 4GB | 2 | 2h59m | 2 | 5 | 15 | 4000 | FALSE | 92GB |
+| S12 | 6GB | 2 | 2h53m | 2 | 10 | 20 | 3000 | FALSE | 92GB |
+| S12 | 6GB | 2 | 2h09m | 2 | 5 | 15 | 3000 | TRUE | 72GB |
+| S12 | 6GB | 6 | 1h35m | 2 | 5 | 15 | 3000 | TRUE | 72GB |
+| S12 | 8GB | 6 | 1h30m | 2 | 10 | 30 | 3000 | TRUE | 75GB |
+
+> **NOTE**: DB size differs in source and target due to different storage concepts (indexes).
+
+## Technical Concept
+
+
+![performance technical concept](performance_architecture.png)
+
+
+### Scheduler
+
+The table scheduler is responsible for triggering the copy process for each table.
+The set of tables the scheduler actually works with is based on the copy item provider and the respective filters configured. How many tables can be scheduled in parallel is determined by the following property:
+
+`migration.data.maxparalleltablecopy`
+
+
+
+### Reader Workers
+
+Each scheduled table gets a set of reader workers. The source table is read using 'keyset/seek' pagination, if possible. For this, a unique key is identified (typically 'PK' or 'ID'), from which the parallel batches are derived. In case this is not possible, the readers fall back to offset pagination.
+Each reader worker uses its own db connection.
+How many reader workers a table can have is defined by the following property:
+
+`migration.data.workers.reader.maxtasks`
+
+The size of the batches each reader will query depends on the following property:
+
+`migration.data.reader.batchsize`
+
+### Blocking Pipe
+
+The batches read by the reader workers are written to a blocking pipe as wrapped datasets.
+The pipe is a blocking queue that can be used to throttle the throughput; it is configured via:
+
+`migration.data.pipe.timeout`
+
+`migration.data.pipe.capacity`
+
+The pipe throws an exception if it has been blocked for too long (for example because the writers are too slow); the default timeout should be sufficient, though.
+If the pipe runs full by reaching its maximum capacity, it blocks and waits until the writers free up space in it.
+
+
+### Writer Workers
+
+The writers read from the pipe until the pipe is sealed. Each dataset is then written to the database as a prepared-statement batch insert. Each writer batch uses its own db connection and transaction (one commit per batch). If a batch insert fails, the batch is rolled back.
+How many writer workers a table can have is defined by the following property:
+
+`migration.data.workers.writer.maxtasks`
+
+The batch size for the writers is bound to the readers' batch size.
+
+## Performance Tuning
+
+### Degree of Parallelization
+
+In most cases there are a lot of small tables and a few very large tables.
+As a result, the duration of the overall migration depends mostly on these large tables. Increasing the number of parallel tables won't help to speed up large tables; instead, the number of workers should be increased. The workers influence how fast a single table can be migrated, since the more workers there are, the more batches of the large table can be executed in parallel. Therefore, it makes sense to reduce the parallel tables to a rather low number to save resources on the infrastructure, and in turn use those resources for increased batch parallelisation in the large tables.
+
+How many workers should be set, for both readers and writers, depends on the power of the involved databases and the underlying infrastructure.
+Since reading is typically faster than writing, a ratio of 1:3 (3 writer workers for 1 reader worker) should be ok.
+Have a look at the benchmarks to see how far you can go with the parallelisation.
+Keep in mind that processing 2 tables in parallel already leads to `2 * rWorkers + 2 * wWorkers` threads / connections in total.
+
+
+### Memory & CPU
+
+By increasing the parallelization degree you can easily overload the system, which may lead to out-of-memory errors.
+
+> **NOTE**: On SAP Commerce Cloud, an out of memory exception can sometimes be hidden. Typically you know you were running out of memory if the pod (backoffice) immediately shuts down or restarts without further notice (related to SAP Commerce Cloud health checks).
+
+Have a close look at the memory metrics and make sure they stay in a healthy range throughout the copy process.
+To solve memory issues, either decrease the degree of parallelization or reduce the capacity of the data pipe.
+
+
+### DB Connections
+
+Some properties may impact each other, which can lead to side effects.
+
+Given:
+
+`migration.data.maxparalleltablecopy`
+
+`migration.data.workers.reader.maxtasks`
+
+`migration.data.workers.writer.maxtasks`
+
+`migration.ds.source.db.connection.pool.size.active.max`
+
+`migration.ds.target.db.connection.pool.size.active.max`
+
+
+The required number of database connections can be derived as follows (a worked example is sketched at the end of this guide):
+
+`#[dbconnectionstarget] >= #[maxparalleltablecopy] * #[maxwritertasks]`
+
+`#[dbconnectionssource] >= #[maxparalleltablecopy] * #[maxreadertasks]`
+
+### Disabling Indexes
+
+Indexes can be a bottleneck when inserting batches.
+MSSQL offers a way to temporarily disable indexes during the copy process.
+This can be done using the property:
+
+`migration.data.indices.disable.enabled`
+
+This will disable the indexes on a table right before it starts the copy. Once finished, they will be rebuilt.
+
+> **NOTE**: Re-enabling the indexes may take quite some time for large tables, which may temporarily slow down and lock the copy process.
+
+> **NOTE**: Disabling the indexes can have the unwanted side effect that duplicate key inserts won't be detected and reported. Therefore only do this if you are sure that no duplicates are around.
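+
+### Worked example: connection sizing
+
+To make the sizing rule from the DB Connections section concrete, here is a sketch with illustrative values (not a recommendation; derive your own numbers from the formulas above):
+
+```
+migration.data.maxparalleltablecopy=2
+migration.data.workers.reader.maxtasks=3
+migration.data.workers.writer.maxtasks=10
+
+# source pool: at least 2 * 3 = 6 active connections
+migration.ds.source.db.connection.pool.size.active.max=6
+# target pool: at least 2 * 10 = 20 active connections
+migration.ds.target.db.connection.pool.size.active.max=20
+```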
diff --git a/docs/performance/performance_architecture.png b/docs/performance/performance_architecture.png
new file mode 100644
index 0000000..4164152
Binary files /dev/null and b/docs/performance/performance_architecture.png differ
diff --git a/docs/performance/template_for_scheduled_operational_activity.docx b/docs/performance/template_for_scheduled_operational_activity.docx
new file mode 100644
index 0000000..8417a4c
Binary files /dev/null and b/docs/performance/template_for_scheduled_operational_activity.docx differ
diff --git a/docs/security/SECURITY-GUIDE.md b/docs/security/SECURITY-GUIDE.md
new file mode 100644
index 0000000..eb01d01
--- /dev/null
+++ b/docs/security/SECURITY-GUIDE.md
@@ -0,0 +1,85 @@
+# SAP Commerce DB Sync - Security Guide
+
+Before you proceed, please make sure you acknowledge the security recommendations below:
+
+## VPN access to the source database is mandatory
+ * The data transfer over a non-authenticated JDBC channel can lead to illegitimate access and undesired data leaks or manipulation.
+ * Therefore, access to the database through a VPN is mandatory to block unauthorised access.
+ * To set up a VPN connection, use the VPN self-service functionality provided by SAP Commerce Cloud Portal.
+
+## Transmission of data over non-encrypted channel
+
+ * It is mandatory to enforce TLS on the source DB server.
+ * It is mandatory to enforce the usage of TLS v1.2 or v1.3, and to support only strong cipher suites.
+
+## Accounts and Credentials
+
+ * Use a dedicated read-only database user for the data migration on the source database. Don't forget to remove this user once the migration activities are finished.
+ * Use a dedicated HAC account during the migration. Create the account on both the source and target system. Remove the account once the migration activities are finished.
+ * The 'Users' table will be overwritten during the migration. Reset admin users' passwords after the migration.
+
+## System Availability
+
+ * The data migration increases the load on the source infrastructure (database); therefore it is mandatory to stop the applications on the source environment.
+ * The load is especially high if you run multiple migrations in parallel. For that reason, be sure to avoid running multiple migrations concurrently.
+ * When using the staged approach, you could end up with many staged copies in the target database, which can impact the availability of the target database. Therefore the number of staged copies is limited to 1 by default (see property: `migration.ds.target.db.max.stage.migrations`).
+
+## Cleanup
+
+It is mandatory to leave the system in a clean state:
+ * Remove the migration extensions after the migration. This applies to all environments once you have finished the migration activities, including the production environment.
+ * Delete the tables that result from staged migrations and are not required for the functioning of the application.
+ * You may want to use the following to support cleanup: [Support Cleanup](../support/SUPPORT-GUIDE.md)
+ * Be aware that, ultimately, it is your responsibility what data is stored in the target database.
+
+
+## Audit and Logging
+
+All actions triggered from Commerce DB Sync will be logged:
+ * validate data source
+ * preview schema migration
+ * create schema script
+ * execute schema script
+ * run migration
+ * stop migration
+
+The format is: `CMT Action: <action> - User: <user> - Time: <timestamp>`
+
+Example:
+
+```
+CMT Action: Data sources tab clicked - User:admin - Time:2021-03-10T10:27:29.675351
+CMT Action: Validate connections button clicked - User:admin - Time:2021-03-10T10:27:32.258041
+CMT Action: Validate connections button clicked - User:admin - Time:2021-03-10T10:27:36.223859
+CMT Action: Schema migration tab clicked - User:admin - Time:2021-03-10T10:27:38.188141
+CMT Action: Preview schema migration changes button clicked - User:admin - Time:2021-03-10T10:27:40.492816
+Starting preview of source and target db diff...
+....
+CMT Action: Data migration tab clicked - User:admin - Time:2021-03-10T10:28:31.993621
+CMT Action: Start data migration executed - User:admin - Time:2021-03-10T10:28:33.710384
+0/1 tables migrated. 0 failed. State: RUNNING
+173/173 processed. Completed in {223.6 ms}. Last Update: {2021-03-10T09:28:34.153}
+1/1 tables migrated. 0 failed. State: PROCESSED
+Migration finished (PROCESSED) in 00:00:00.296
+Migration finished on Node 0 with result false
+DefaultMigrationPostProcessor Finished
+Finished writing database migration report
+```
+
+## Security of the external database
+
+For the use case where the customer replicates data to their own external database, due diligence is required to secure that database:
+* The customer should secure the external DB with proper configuration so that it is not exposed to denial-of-service or buffer overflow attacks.
+* The dedicated user with access to the external DB should have minimum privileges.
+* Personal data should be cleaned up from the external DB once the retention period is reached, as required by the GDPR.
+* The customer needs to be aware of the size of the data being migrated and needs to manage the limits of the DB accordingly.
+
+## Reporting
+
+To be able to track past activities, the tool creates reports for the following actions:
+
+ * SQL statements executed during schema migration (file name: timestamp of execution);
+ * Summary of the migration copy process (file name: migration id)
+
+The reports are automatically written to the hotfolder blob storage ('migration' folder).
+Sensitive data (e.g. passwords) is not written to the reports.
diff --git a/docs/support/SUPPORT-GUIDE.md b/docs/support/SUPPORT-GUIDE.md
new file mode 100644
index 0000000..d83ab96
--- /dev/null
+++ b/docs/support/SUPPORT-GUIDE.md
@@ -0,0 +1,79 @@
+# Commerce Database Sync - Support Guide
+
+Here you can find some guidelines for members of the Cloud Support Team.
+
+## Staged migration approach
+
+To display a summary of the source and target schemas, you can use the following Groovy script:
+
+``groovy/MigrationSummaryScript.groovy``
+
+Alternatively, copy the script from here:
+
+```
+package groovy
+
+import de.hybris.platform.util.Config
+import org.apache.commons.lang.StringUtils
+
+import java.util.stream.Collectors
+
+def result = generateMigrationSummary(migrationContext)
+println result
+return result
+
+def generateMigrationSummary(context) {
+    StringBuilder sb = new StringBuilder();
+    try {
+        final String sourcePrefix = context.getDataSourceRepository().getDataSourceConfiguration().getTablePrefix();
+        final String targetPrefix = context.getDataTargetRepository().getDataSourceConfiguration().getTablePrefix();
+        final String dbPrefix = Config.getString("db.tableprefix", "");
+        final Set<String> sourceSet = migrationContext.getDataSourceRepository().getAllTableNames()
+                .stream()
+                .map({ tableName -> tableName.replace(sourcePrefix, "") })
+                .collect(Collectors.toSet());
+
+        final Set<String> targetSet = migrationContext.getDataTargetRepository().getAllTableNames()
+        sb.append("------------").append("\n")
+        sb.append("All tables: ").append(sourceSet.size() + targetSet.size()).append("\n")
+        sb.append("Source tables: ").append(sourceSet.size()).append("\n")
+        sb.append("Target tables: ").append(targetSet.size()).append("\n")
+        sb.append("------------").append("\n")
+        sb.append("Source prefix: ").append(sourcePrefix).append("\n")
+        sb.append("Target prefix: ").append(targetPrefix).append("\n")
+        sb.append("DB prefix: ").append(dbPrefix).append("\n")
+        sb.append("------------").append("\n")
+        sb.append("Migration Type: ").append("\n")
+        sb.append(StringUtils.isNotEmpty(dbPrefix) &&
+                StringUtils.isNotEmpty(targetPrefix) && !StringUtils.equalsIgnoreCase(dbPrefix, targetPrefix) ? "STAGED" : "DIRECT").append("\n")
+        sb.append("------------").append("\n")
+        sb.append("Found prefixes:").append("\n")
+
+        Map<String, Long> prefixes = new HashMap<>()
+        targetSet.forEach({ tableName ->
+            String srcTable = schemaDifferenceService.findCorrespondingSrcTable(sourceSet, tableName);
+            String prefix = tableName.replace(srcTable, "");
+            prefixes.put(prefix, targetSet.stream().filter({ e -> e.startsWith(prefix) }).count());
+        });
+        prefixes.forEach({ k, v -> sb.append("Prefix: ").append(k).append(" number of tables: ").append(v).append("\n") });
+        sb.append("------------").append("\n");
+
+    } catch (Exception e) {
+        e.printStackTrace();
+    }
+    return sb.toString();
+}
+
+
+```
+
+It prints information as follows:
+* total number of all tables (source + target)
+* number of source tables
+* number of target tables
+* prefixes defined as properties (source, target & current database schema)
+* all detected prefixes in the target database, with the number of tables
+
+You can use this information to remove staged tables generated during the data migration process.
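+
+For example, once the summary reveals a leftover staging prefix, a short follow-up script can list the tables to drop. This is a sketch that reuses the same `migrationContext` binding as the script above; the prefix value is hypothetical:
+
+```
+def prefix = "cmt" // hypothetical leftover prefix reported by the summary
+migrationContext.getDataTargetRepository().getAllTableNames()
+        .findAll { it.startsWith(prefix) }
+        .each { println "DROP TABLE ${it};" }
+```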
+
+ ![support_groovy_preview.png](support_groovy_preview.png)
diff --git a/docs/support/support_groovy_preview.png b/docs/support/support_groovy_preview.png
new file mode 100644
index 0000000..5b9582e
Binary files /dev/null and b/docs/support/support_groovy_preview.png differ
diff --git a/docs/troubleshooting/TROUBLESHOOTING-GUIDE.md b/docs/troubleshooting/TROUBLESHOOTING-GUIDE.md
new file mode 100644
index 0000000..93409fc
--- /dev/null
+++ b/docs/troubleshooting/TROUBLESHOOTING-GUIDE.md
@@ -0,0 +1,135 @@
+# SAP Commerce DB Sync - Troubleshooting Guide
+
+## Duplicate values for indexes
+
+Symptom:
+
+The pipeline aborts during the copy process with a message like:
+```
+FAILED! Reason: The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.cmtmedias' and the index name 'cmtcodeVersionIDX_30'. The duplicate key value is (DefaultCronJobFinishNotificationTemplate_de, ).
+```
+
+Solution:
+
+This can happen if you are using a case-sensitive collation on the source database, either at database level or at table/column level.
+The Commerce Cloud target database is case-insensitive by default and will treat values like 'ABC'/'abc' as equal during index creation.
+If possible, remove the duplicate rows before any migration activities. In case this is not possible, consult Support.
+
+> **Note**: MySQL doesn't take NULL values into account for index checks. SQL Server does and will thus fail with duplicates.
+
+## Migration fails for unknown reason
+
+Symptom:
+
+If you were overloading the system for a longer period of time, you may have encountered one of the nodes restarting in the background without notice.
+
+
+Solution:
+
+In any case, check the logs (Kibana).
+Check in Dynatrace whether a process crash log exists for the node.
+In case the process crashed, throttle the performance by changing the respective properties.
+
+
+## MySQL: xy table does not exist error
+
+Symptom:
+
+`java.sql.SQLSyntaxErrorException: Table '' doesn't exist`
+even though the table should exist.
+
+Solution:
+
+This is changed behaviour in the 8.x driver compared to the 5.x driver used before. In case there are multiple catalogs in the database, the driver distorts the reading of the table information...
+
+... add the url parameter
+
+`nullCatalogMeansCurrent=true`
+
+... to your JDBC connection URL and the error should disappear.
+
+## MySQL: java.sql.SQLException: HOUR_OF_DAY ...
+
+Symptom:
+
+
+```
+Caused by: java.sql.SQLException: HOUR_OF_DAY: 2 -> 3
+at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:85) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.result.ResultSetImpl.getTimestamp(ResultSetImpl.java:903) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+at com.mysql.cj.jdbc.result.ResultSetImpl.getObject(ResultSetImpl.java:1243) ~[mysql-connector-java-8.0.19.jar:8.0.19]
+```
+
+Solution:
+
+This is a known issue on MySQL when dealing with time/date objects. The workaround is to add...
+
+`&useTimezone=true&serverTimezone=UTC`
+
+...to your source connection string.
+
+
+## Backoffice does not load
+
+Symptom:
+
+Backoffice does not load properly after the migration.
+
+Solution:
+
+- Use the F4 mode (admin user) and reset the backoffice settings on the fly.
+- Reload the browser cache.
+
+## Proxy error in hAC
+
+Symptom:
+
+hAC throws or displays proxy errors when using the migration features.
+
+Solution:
+
+Raise the default proxy timeout in the Commerce Cloud Portal to a higher value.
+This can be done on the edit view of the respective endpoint.
+
+## MSSQL: Boolean type
+
+The boolean type in MSSQL is a bit data type storing 0/1 values.
+If you were using queries with TRUE/FALSE values, you may have to change or convert the queries in your code to use the bit values.
+
+## Sudden increase of memory
+
+Symptom:
+
+The memory consumption is more or less stable throughout the copy process, but then suddenly increases for certain table(s).
+
+Solution:
+
+If batching of reading and writing is not possible due to the definition of the source table, the copy process falls back to a non-batched mechanism.
+This requires loading the full table into memory at once, which, depending on the table size, may lead to unhealthy memory consumption.
+For small tables this is typically not an issue, but for large tables it should be mitigated, for example by looking at the indexes.
+
+## Some tables are copied over very slowly
+
+Symptom:
+
+While some tables are running smoothly, others seem to suffer from low throughput.
+This may happen for the props table, for example.
+
+Solution:
+
+The copy process tries to apply batching for reading and writing where possible.
+For this, the source table is scanned for either a 'PK' column (normal Commerce table) or an 'ID' column (audit tables).
+Some tables don't have such a column, like the props table. In this case the copy process tries to identify the smallest unique (compound) index and uses it for batching.
+If a table is slow, check the following:
+- Does an ID or PK column exist?
+- Is the ID or PK column backed by a unique index?
+- Does any other unique index exist?
+
+If the smallest compound unique index consists of too many columns, the reading may impose a high processing load on the source database due to the sort buffer running full.
+Depending on the source database, you may have to tweak some db settings to process the query efficiently.
+Alternatively, you may have to consider adding a custom unique index manually.
diff --git a/docs/user/SUPPORT-DELETE-GUIDE.md b/docs/user/SUPPORT-DELETE-GUIDE.md
new file mode 100644
index 0000000..0c6c63e
--- /dev/null
+++ b/docs/user/SUPPORT-DELETE-GUIDE.md
@@ -0,0 +1,89 @@
+# Commerce Database Sync - Deletion Support
+
+SAP Commerce DB Sync supports deletions. Deletion tracking can be enabled for transactional tables using two different approaches:
+- Default approach using an After Save Event Listener
+- Alternative approach using a Remove Interceptor
+
+## Approaches for deletions
+
+### Default Approach using After Save Event Listener
+
+An [After Save Event Listener](https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/2011/en-US/8b51226d866910149803df2610bb39a5.html), which is enabled only for a limited set of type codes and only in case of a constraint violation.
+* Activate the after save listener by defining an implementation of the AfterSaveListener interface (the bean definition was elided here; register your own implementation of de.hybris.platform.tx.AfterSaveListener):
+
+```
+<bean id="..." class="...">
+    <!-- your AfterSaveListener implementation -->
+</bean>
+```
+
+* Configurable property for the list of type codes for which deletions should be managed:
+```
+ # Provide the type codes as a comma-separated list
+ migration.data.incremental.deletions.typecodes=4,30
+ migration.data.incremental.deletions.typecodes.enabled=true
+```
+* Dedicated item type for deleted records (separate table with PK). For now, this is supported through **ItemDeletionMarker**.
+* Deletion tracking is tied to the incremental mode to avoid duplicates.
+
+**Disclaimer**: Direct deletions via the DB or a JDBC template are not supported. When the SAP Commerce server is stopped, events still in the queue that have not been handled by the **AfterSaveEventPublisher** threads are lost.
+
+#### Technical Concept
+
+##### Publish after save event
+When a transaction is committed, an after save event is either added to a blocking queue (asynchronous mode) or the After Save Listeners are notified directly (synchronous mode).
+![Publish after save event](after_save_listener_1.png)
+
+##### Event handling when events are sent asynchronously
+
+The pool of **AfterSaveEventPublisherThread**s is managed by **DefaultAfterSaveListenerRegistry**; each thread drains the blocking queue of after save events and calls the After Save Listeners.
+
+![Event Handle asynchronously](after_save_listener_2.png)
+
+Here are the **DefaultAfterSaveListenerRegistry** tuning parameters:
+```
+core.aftersave.async=true //default true (asynchronous mode)
+core.aftersave.interval=200 //sleep time in ms
+core.aftersave.batchsize=1024 //draining batch size
+core.aftersave.queuesize=1024 //maximum elements in the queue before blocking
+```
+
+### Alternative approach using Remove Interceptor
+
+**Note**: This approach is disabled by default; only enable it if you face difficulties with the _after save listener_ approach.
+
+A Remove Interceptor, which is enabled only for a limited set of item types and only in case of a constraint violation.
+* Activate the delete interceptor by defining an InterceptorMapping for each tracked item type (the concrete bean definitions were elided here; a standard mapping looks like this, with placeholders to fill in):
+
+```
+<bean class="de.hybris.platform.servicelayer.interceptor.impl.InterceptorMapping">
+    <property name="interceptor" ref="..."/>
+    <property name="typeCode" value="..."/>
+</bean>
+```
+
+* Configurable property for the list of item types for which deletions should be managed:
+```
+ # Provide the item types for deletions
+ migration.data.incremental.deletions.itemtype=Media,Employee
+ migration.data.incremental.deletions.itemtypes.enabled=true
+```
+* Dedicated item type for deleted records (separate table with PK). For now, this is supported by ItemDeletionMarker.
+### Alternative approach using Remove Interceptor
+
+**Note**: This approach is disabled by default; only enable it if you face difficulties with the _after save listener_ approach.
+
+The Remove Interceptor is enabled only for a limited set of item types and only in case of constraint violations.
+* Activate the delete interceptor by defining an InterceptorMapping for each tracked item type (a minimal Java sketch of the interceptor follows below). The original snippet was lost in this document; the bean ids and item type below are illustrative placeholders:
+
+```
+<!-- illustrative reconstruction; bean ids and item type are placeholders -->
+<bean id="deletionsRemoveInterceptorMapping"
+      class="de.hybris.platform.servicelayer.interceptor.impl.InterceptorMapping">
+    <property name="interceptor" ref="deletionsRemoveInterceptor"/>
+    <property name="typeCode" value="Media"/>
+</bean>
+```
+
+* Configurable property for the list of item types for which deletions should be tracked:
+```
+# Provide the item types for deletions
+migration.data.incremental.deletions.itemtype=Media,Employee
+migration.data.incremental.deletions.itemtypes.enabled=true
+```
+* A dedicated item type marks deleted records (a separate table with a PK). For now, this is supported by ItemDeletionMarker.
+* Deletion tracking is tied to the incremental migration to avoid duplicates.
+
+**Disclaimer**: Deletions are only captured when removals go through the service layer; removals via legacy sync, legacy ImpEx, or Service Layer Direct bypass interceptors and are not tracked.
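+A minimal Java sketch of such an interceptor (the class name is illustrative, and the marker-persistence step is omitted):
+
+```
+import de.hybris.platform.core.model.ItemModel;
+import de.hybris.platform.servicelayer.interceptor.InterceptorContext;
+import de.hybris.platform.servicelayer.interceptor.InterceptorException;
+import de.hybris.platform.servicelayer.interceptor.RemoveInterceptor;
+
+// Illustrative sketch: called for every removal of a mapped item type.
+public class DeletionsRemoveInterceptor implements RemoveInterceptor<ItemModel>
+{
+    @Override
+    public void onRemove(final ItemModel item, final InterceptorContext ctx) throws InterceptorException
+    {
+        // Record the deletion for the incremental sync, e.g. by creating an
+        // ItemDeletionMarker holding item.getPk() and item.getItemtype()
+        // (persistence omitted in this sketch).
+    }
+}
+```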
+## When to use
+
+* Deletion tracking is not required for all tables; a few use cases to consider:
+  - Constraint violation failures during the incremental migration
+  - Deletions triggered by the application, e.g. removing an entry from a cart
+* Don't enable it for audit tables or task logs
+* Deletions and migration are handled together to avoid constraint violations
+* It can be toggled through properties
diff --git a/docs/user/USER-GUIDE-DATA-MIGRATION.md b/docs/user/USER-GUIDE-DATA-MIGRATION.md
new file mode 100644
index 0000000..b559c4e
--- /dev/null
+++ b/docs/user/USER-GUIDE-DATA-MIGRATION.md
@@ -0,0 +1,213 @@
+# Commerce DB Sync - User Guide for Data Migration
+
+The data migration use case covers migrating from SAP Commerce on-premise to SAP Commerce Cloud.
+
+![architecture overview for data migration from SAP Commerce OnPrem to SAP Commerce Cloud](data_migration_architecture.png)
+
+The tool can be used with three different approaches for migration:
+1. Staged copy approach: this method allows you to use the SAP Commerce prefix feature to create a separate staged copy of your tables in the database. This way, while migrating, you can preserve a full copy of the existing database in your SAP Commerce Cloud subscription and migrate the data into the staged tables. When you are satisfied with the data copied into the staged tables, you can switch the prefixes to shift your SAP Commerce installation to the migrated data. The main differences from the direct copy approach are the configuration and usage of the prefixes within the extensions, and the cleanup at the end of the migration.
+2. Direct copy approach: this method directly overwrites the data of your database in SAP Commerce Cloud.
+3. Incremental approach: can be used after either of the previous approaches to incrementally migrate selected data. Please check the [Configure incremental data migration](../configuration/CONFIGURATION-GUIDE.md) section.
+
+You can find more details below at [How to choose the best approach for my migration](#How-to-choose-the-best-approach-for-my-migration).
+
+## Prerequisites
+Carefully read the prerequisites and make sure you meet the requirements before you commence the migration. Some of the prerequisites may require code adaptations or database cleanup tasks to prepare for the migration, so make sure you reserve enough time to adhere to your project plan.
+
+Before you begin, ensure you have met the following requirements:
+
+* Your code base is compatible with the SAP Commerce version required by SAP Commerce Cloud (at minimum).
+* The code base is exactly the same in both target and source systems. This includes:
+  * platform version
+  * custom extensions
+  * set of configured extensions
+  * type system definition as specified in \*-items.xml
+* Database-specific attribute data types must be compatible with the target database.
+* Orphaned-types cleanup has been performed in the source system. Data referencing deleted types has been removed.
+* The target system is in a state where it can be initialized and the data imported.
+* The source system is updated with the same \*-items.xml as deployed on the target system (i.e. a system update has been performed).
+* The connectivity to the source database from SAP Commerce Cloud happens via a secured channel, such as the self-serviced VPN that can be created in the SAP Commerce Cloud Portal. It is obligatory, and the customer's responsibility, to secure the data transmission.
+* Old type systems have been deleted in the source system.
+* A check for duplicates has been performed and existing duplicates in the source database have been removed.
+* The task engine has been disabled in all target nodes (cronjob.timertask.loadonstartup=false).
+
+## Limitations
+
+* The tool only copies table data. Any other database features like views, stored procedures, synonyms, etc. will be ignored.
+* Only the database vendors mentioned in the Compatibility section are supported.
+
+## Install the extensions
+
+To install SAP Commerce DB Sync, follow these steps:
+
+Add the following extensions to your localextensions.xml (the original snippet was lost in this document; `commercedbsync` is referenced elsewhere in this guide, and the HAC extension name `commercedbsynchac` is an assumption):
+```
+<extension name="commercedbsync"/>
+<extension name="commercedbsynchac"/>
+```
+
+> **NOTE**: For SAP Commerce Cloud make sure the extensions are actually being loaded by the manifest.json
+
+Make sure you add the source db driver to **commercedbsync/lib** if necessary.
+
+## Configure the extensions
+Configure the extensions as needed in your **local.properties**. See the [Property Configuration Reference](../configuration/CONFIGURATION-REFERENCE.md).
+
+At the very least you have to configure the connection to your source database. Here is an example for MySQL:
+
+```
+migration.ds.source.db.driver=com.mysql.jdbc.Driver
+migration.ds.source.db.url=jdbc:mysql://[host]:3600/localdev?useConfigs=maxPerformance&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&nullCatalogMeansCurrent=true
+migration.ds.source.db.username=[username]
+migration.ds.source.db.password=[pw]
+migration.ds.source.db.tableprefix=
+migration.ds.source.db.schema=localdev
+```
+
+> **NOTE**: If you are not running in SAP Commerce Cloud (i.e. locally) make sure the target database is MSSQL.
+
+## Build and start the platform
+
+Build and start the on-premise SAP Commerce platform.
+
+For a local installation:
+```
+> ant all initialize && ./hybrisserver.sh
+```
+
+On SAP Commerce Cloud:
+
+* Trigger a build and deploy to the respective environment with initialization (if not yet done).
+
+For the staged copy approach, each table prefix requires an initialization first. Imagine the following example scenario:
+
+* The Commerce runtime uses the prefix 'cc'
+* Data is being migrated to the prefix 'cmt'
+
+For this, the system has to be initialized twice:
+1. ```db.tableprefix = cc``` for the first initialization
+2. ```db.tableprefix = cmt``` for the second initialization
+
+Once finished, use the following properties to control which prefix is used by the commerce runtime and which prefix data is copied to (a filled-in example for this scenario follows below):
+```
+migration.ds.target.db.tableprefix =
+db.tableprefix =
+```
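+For the example scenario above, the filled-in values during the migration phase would be:
+```
+# runtime keeps using the 'cc' tables, data is copied into the 'cmt' tables
+migration.ds.target.db.tableprefix = cmt
+db.tableprefix = cc
+```
+After thorough testing, the staged approach described below inverts the two prefixes to switch the runtime to the migrated data.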
+## Proxy Timeout
+Some operations in the admin console may take more time to execute than the default proxy timeout in SAP Commerce Cloud allows.
+To make sure you don't run into a proxy timeout exception, adjust the value in your endpoint accordingly:
+
+![proxy_timeout](proxy_timeout.png)
+
+A value between 10 and 20 minutes should be safe; however, it depends on the components and systems involved.
+
+> **IMPORTANT**: make sure to revert the value to either the default or a value that suits your needs after completion of the migration.
+
+## Establish a secure connection
+It is mandatory to establish a secure connection between your on-premise and SAP Commerce Cloud environments. To do so, you can use the [self-service VPN feature from the SAP Cloud Portal](https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/v2005/en-US/f63dfaed22d949ed9aadbb7835584586.html).
+
+## Data Source Validation
+After having established secure connectivity, validate the source and target database connections. For this, open the HAC and go to Migration->Data Sources.
+
+![hac validate](hac_validate_ds.png)
+
+## Check Schema Differences
+Check if there are any schema differences. For this, open the HAC and go to Migration->Schema Migration. By clicking "Preview Schema Migration Changes" you will see a list of schema differences, if any.
+
+![hac schema diff prev](hac_schema_diff_prev.png)
+
+If there are schema differences, switch to the right-hand tab and generate the SQL script to adjust the target schema.
+
+![hac schema diff exec](hac_schema_diff_exec.png)
+
+Make sure to review the script, and execute it only once you are confident it is correct.
+
+## Copy Schema
+After you have analysed all the schema differences and understood what data you want to migrate, you can use the "Migrate Schema" button to modify the target SAP Commerce Cloud schema and make it equivalent to the source schema. Please note, this operation executes the following in the target schema:
+* Create tables
+* Add/drop columns on existing tables
+
+With the staged copy approach, the system detects how many staged tables already exist in the target database. If this number exceeds the pre-defined config parameter
+`migration.ds.target.db.max.stage.migrations` (by default set to 5),
+the generated script contains queries that remove all tables and indexes corresponding to the identified staged tables. Note that the current schema stays untouched, so as not to disrupt your system.
+Once the system no longer detects excess staged tables, you should see the queries creating the new tables instead.
+
+> **NOTE**: no changes are made to the source database.
+
+## Start the Data Migration
+Start the data migration. For this, open the HAC and go to Migration->Data Migration. Click "Copy Source Data" to start the migration.
+
+![hac data migration](hac_migrate_data.png)
+
+The migration progress is displayed in the HAC. It also shows some useful performance metrics:
+* Current memory utilisation
+* Current CPU utilisation
+* Current DTU utilisation (if available)
+* Current source and target db pool consumption
+* Current I/O (rows read / written)
+
+Check the console output for further migration progress information, i.e.:
+```
+...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {mediaformatlp->mediaformatlp} finished in {513.6 ms}...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {bundletemplatestatus->bundletemplatestatus} finished in {440.8 ms}...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {cxusertosegment->cxusertosegment} finished in {1.644 s}...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {triggerscj->triggerscj} finished in {410.8 ms}...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {droolskiebase->droolskiebase} finished in {303.5 ms}...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. {306 of 306} tables migrated...
+INFO [hybrisHTTP7] [CustomClusterDatabaseCopyScheduler] Node {0}. Migration-ID {1814262805}. Tables migration took {25.57 s}...
+```
+> **NOTE**: The process will only take the intersection of the source and target tables into consideration. Tables that are in the source schema but not in the target schema (and vice versa) will be ignored.
+## Verify the migrated data
+Check the UI to verify that all tables have been copied successfully. All logs from the migration are available in the Kibana interface. At the end of the data copy, you can find a report in the HAC, together with a button that lets you download the report.
+
+![Report Blob Storage](hac_report.png)
+
+## Start the Media Migration
+While you are migrating the database, use the process described in the [azcopy cxworks](https://www.sap.com/cxworks/article/508629017/migrate_to_sap_commerce_cloud_migrate_media_with_azcopy) article to migrate your media.
+
+## Perform update running system
+After both the database and media migrations are completed, perform an update running system. Do not skip this step, as it is fundamental for SAP Commerce Cloud to function correctly.
+
+### Direct approach
+For a local installation:
+```
+> ant updatesystem
+```
+
+On SAP Commerce Cloud:
+
+Execute a deployment with data migration mode "Migrate Data" (equivalent to a system update).
+
+### Staged approach
+
+If you were using the staged approach, simply navigate to your properties file and invert the two prefixes configured at the beginning:
+```
+migration.ds.target.db.tableprefix =
+db.tableprefix =
+```
+Execute a deployment with data migration mode "Migrate Data" (equivalent to a system update).
+
+If you want to remove the set of pre-existing tables, you can:
+* generate the SQL schema scripts once again (the system will detect the staged tables, see the property 'migration.ds.target.db.max.stage.migrations'). You can review this script and run it by clicking the "Execute script" button;
+* open a ticket to request SAP to remove such tables.
+
+## Test the migrated data
+
+Execute thorough testing on the migrated environment to ensure data quality. The data is copied one-to-one from the source database, so some adjustments may be needed after the copy process. Examples are data stored in the database that refers to particular parts of the infrastructure that might have changed in SAP Commerce Cloud (e.g. Solr references, Data Hub references, etc.). Passwords are also migrated one-to-one, so if you have changed the default encryption key in your source system, please reference the section "Key Management and Key Rotation" of [this guide](https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/2005/en-US/8b2c75c886691014bc12b8b532a96f58.html) to align it in SAP Commerce Cloud.
+## How to choose the best approach for my migration
+
+Staged copy approach:
+* By having a separate migration prefix, the code base in SAP Commerce Cloud is decoupled from the on-premise one, allowing you to be more flexible, e.g. executing an upgrade and a migration at the same time. This is only valid until you switch the prefixes, so be sure to have executed thorough testing before doing so;
+* The original prefix tables can always be used as a safe rollback in case of issues;
+* This approach is recommended to ensure that you do not lose any data in the target system.
+
+Direct copy approach:
+* This approach can be used when you are very sure about the successful execution of the data migration and are OK with potentially having to re-initialize the system in case of problems.
+
+Incremental approach:
+* This should be used when the requirement for a short cutover time is critical and, for the tables that are not selected as part of the incremental migration, it is acceptable to either introduce a data freeze or ignore data changes after the initial bulk copy;
+* This approach can be used on top of both the staged copy and the direct copy approach.
diff --git a/docs/user/USER-GUIDE-DATA-REPLICATION.md b/docs/user/USER-GUIDE-DATA-REPLICATION.md
new file mode 100644
index 0000000..74a87f3
--- /dev/null
+++ b/docs/user/USER-GUIDE-DATA-REPLICATION.md
@@ -0,0 +1,139 @@
+# Commerce DB Sync - User Guide for Data Replication
+
+Data replication allows you to synchronize selected data single-directionally (from CCV2) to an external database, hosted either on-premise or in a public cloud.
+
+This external database can then be used for analytics and reporting purposes.
+
+![architecture overview for data sync between SAP Commerce Cloud to an external database](data_replication_architecture.png)
+
+It provides the following features:
+
+* The Sync Schema describes which data (tables/items) is being synchronized.
+* The Sync Direction is single-directional only, from CCV2 to an on-premise or other cloud MS SQL database.
+* The Sync Interval describes how often synchronization occurs.
+
+## Methodology for Data Sync
+
+* Identify the tables you would like to sync (limit them to the minimum required and avoid large tables when possible). For example: do not sync task logs!
+* Define a strategy to manage deletions if required, see the **Support for deletion** section
+* Remove or add indexes in the target db as needed (e.g. drop indexes that are unnecessary there)
+* Create indexes on the last-modified timestamp for tables that support incremental sync (see the sketch after this list)
+* Run a full data migration with all tables
+* Run the incremental sync regularly (for example every hour)
+* Reconfigure the full data migration cronjob to sync the tables with no last-modified timestamp
+* Run the full data migration cronjob regularly and during low activity (e.g. every day at 3 PM) - this resolves any potential integrity issues accumulated during the regular incremental runs
+* Ensure the data migration cronjobs are running against the read-only database
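+A minimal sketch for such an index (table and index names are illustrative; SAP Commerce tables keep the last-modified timestamp in the modifiedTS column):
+
+```
+-- illustrative: supports the delta lookup of the incremental sync
+CREATE INDEX idx_orders_modifiedts ON orders (modifiedTS);
+```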
+## Limitations to consider with Data Sync
+
+The following limitations should be considered when implementing Commerce DB Sync for the data sync use case:
+* Not all Commerce tables contain a last-modified timestamp
+  * This is usually the case in SAP Commerce for system tables (e.g. PROPS, AUDIT_LOGS, ...)
+  * There is an additional feature in SAP Commerce DB Sync to support incremental migration for ***LP*** system tables.
+  * Other system tables are usually relatively small and can be fully synced each time
+  * You should ensure you do not have any large table that does not contain the timestamp
+* Incremental updates may create data integrity issues between two runs
+  * During the cronjob execution, some updates or creations may not be synced
+  * Tables with relations may end up only partially synced, for example Orders vs. Order entries
+  * This should be acceptable for reporting or analytics use cases, but it should be taken into consideration for any application using the destination DB
+* Sync of master data (tables without a last-modified timestamp) may be delayed (e.g. 24 hours)
+* Deletion management poses particular challenges with incremental data sync, and a strategy needs to be defined, see the **Support for deletion** section
+* Performance should be tested to tune the batch size and number of threads (memory and CPU on the application server)
+
+## Support for deletion
+
+SAP Commerce DB Sync supports deletions. Deletion tracking can be enabled for transactional tables using two different approaches:
+- Default approach using an After Save Event Listener
+- Alternative approach using a Remove Interceptor
+
+See [Deletion Support](./SUPPORT-DELETE-GUIDE.md).
+
+## Installation and Setup
+
+### Install SAP Commerce DB Sync on your source system
+
+- Add the following extensions to your **localextensions.xml** (the original snippet was lost in this document; the extension names match the data migration guide):
+```
+<extension name="commercedbsync"/>
+<extension name="commercedbsynchac"/>
+```
+- Execute a system update
+
+### Configure SAP Commerce DB Sync
+
+See the [Configuration Reference](../configuration/CONFIGURATION-REFERENCE.md) for a high-level overview of the properties configurable in Commerce Database Sync.
+
+The following properties need to be reconfigured or readjusted for Data Sync (a filled-in sketch follows after the table):
+
+| Property | Mandatory | Default | Description |
+|---|---|---|---|
+| migration.ds.source.db.url | yes | | DB url for the source connection. The value should be **${db.url};ApplicationIntent=ReadOnly**; ApplicationIntent can be adjusted or removed for local testing |
+| migration.ds.source.db.schema | no | dbo | DB schema for the source connection |
+| migration.ds.target.db.driver | yes | ${db.driver} | DB driver class for the target connection |
+| migration.ds.target.db.username | yes | | DB username for the target connection |
+| migration.ds.target.db.password | yes | | DB password for the target connection |
+| migration.ds.target.db.tableprefix | no | ${db.tableprefix} | DB table prefix for the target connection |
+| migration.ds.target.db.schema | no | dbo | DB schema for the target connection |
+| migration.data.tables.included | no | | Tables to be included in the migration. It is recommended to set this parameter during the first load of a selective table sync, which allows you to sync directly from the HAC along with the schema. You can achieve something very similar with the full migration cronjobs by adjusting the list of tables. |
+| migration.data.report.connectionstring | yes | ${media.globalSettings.cloudAzureBlobStorageStrategy.connection} | Target blob storage for the report generation; you can replace it with the hotfolder blob storage ${azure.hotfolder.storage.account.connection-string} |
+| migration.data.workers.retryattempts | no | 0 | Retry attempts if a batch (read or write) failed |
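+A filled-in sketch of these properties (host, database name and credentials are placeholders; `migration.ds.target.db.url` is assumed by symmetry with the source properties):
+
+```
+# source: the CCV2 database, read-only
+migration.ds.source.db.url=${db.url};ApplicationIntent=ReadOnly
+
+# target: the external reporting database
+migration.ds.target.db.driver=${db.driver}
+migration.ds.target.db.url=jdbc:sqlserver://[host]:1433;databaseName=[reporting-db]
+migration.ds.target.db.username=[username]
+migration.ds.target.db.password=[pw]
+migration.ds.target.db.tableprefix=
+migration.ds.target.db.schema=dbo
+```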
| +| migration.data.report.connectionstring | yes | ${media.globalSettings.cloudAzureBlobStorageStrategy.connection} | target blob storage for the report generation, although you can replace with Hotfolder Blob storage ${azure.hotfolder.storage.account.connection-string} | +| migration.data.workers.retryattempts | no | 0 | retry attempts if a batch (read or write) failed. | + +## CronJob Configuration reference Data Sync + +Commerce DB Sync for data replication is managed by Cronjobs which allow you to trigger full and regular sync based on sync interval. + +Following High-level details for the Cronjobs, +#### FullMigrationCronJob +It allows you to sync the full based on the list provided in CronJob settings. +List of attributes/properties can set during Full migration + +| attributes | Mandatory | Default | Description | +|--------------------------------------------------------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------| +| migrationItems | yes | | Initially it can be set through impex file, and later it Adjusted through either Backoffice or Impex. You can set list of table with required full sync during initials, later adjust based on business case. | +| schemaAutotrigger | no | false | Adjust this value if you have any Data model changes, it can be changed to true, but it will add delay in every sync. | +| truncateEnabled | yes | false | Allow truncating the target table before writing data which is mandatory for the Full Sync, set **true** for full Sync | +| cronExpression | yes | 0 0/1 * * * ? | Set via impex file | + +#### IncrementalMigrationCronJob +It allows you to sync the delta based on modifiedTS hence tables must have the following columns: modifiedTS, PK. Furthermore, this is an incremental approach... only modified and inserted rows are taken into account. Deletions on the source side are not handled. + +List of attributes/properties can set during incremental migration + +| attributes | Mandatory | Default | Description | +|--------------------------------------------------------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------| +| migrationItems | yes | | Initially it can be set through impex file, and later it Adjusted through either Backoffice or Impex. | +| schemaAutotrigger | no | false | Adjust this value if you have any Data model changes, it can be changed to true, but it will add delay in every sync. | +| truncateEnabled | yes | false | Set **false** for incremental sync | +| cronExpression | yes | 0 0 0 * * ? | Set via impex file | +| lastStartTime | yes | | Its updated based last triggered timestamp. Update manually for longer window. | + +_**Note:**_ It's better to create separate Cronjobs for Language(LP) and non Language tables. The frequency of updates for the LP table is much lesser than Non-LP. 
+ +#### Default Impex file +``` +INSERT_UPDATE ServicelayerJob;code[unique=true];springId[unique=true] +;incrementalMigrationJob;incrementalMigrationJob +;fullMigrationJob;fullMigrationJob + +# Update details for incremental migration +INSERT_UPDATE IncrementalMigrationCronJob;code[unique=true];active;job(code)[default=incrementalMigrationJob];sessionLanguage(isoCode)[default=en] +;incrementalMigrationJobNonLP;true; +;incrementalMigrationJobLP;true; + +INSERT_UPDATE IncrementalMigrationCronJob;code[unique=true];migrationItems +;incrementalMigrationJobNonLP;PAYMENTMODES,ADDRESSES,users,CAT2PRODREL,CONSIGNMENTS,ORDERS +;incrementalMigrationJobLP;validationconstraintslp,catalogslp + +INSERT_UPDATE FullMigrationCronJob;code[unique=true];active;job(code)[default=fullMigrationJob];sessionLanguage(isoCode)[default=en] +;fullMigrationJob;true; + +INSERT_UPDATE FullMigrationCronJob;code[unique=true];truncateEnabled;migrationItems +;fullMigrationJob;true;PAYMENTMODES,products + +INSERT_UPDATE Trigger;cronjob(code)[unique=true];cronExpression +#% afterEach: impex.getLastImportedItem().setActivationTime(new Date()); +;incrementalMigrationJobLP; 0 0/1 * * * +;incrementalMigrationJobNonLP; 0 0 0 * * ? +;fullMigrationJob; 0 0 0 * * ? +``` + diff --git a/docs/user/data_migration_architecture.drawio b/docs/user/data_migration_architecture.drawio new file mode 100644 index 0000000..923775b --- /dev/null +++ b/docs/user/data_migration_architecture.drawio @@ -0,0 +1 @@ +7Vxdc5s4FP01eYwHEGD8GMdJOrtpm1nvZtunjAwyZgPIFXJj99evhCUbkHDMxuBklnTawkV8nXt079GVyAW4TtZ3BC4Xn3GA4gvLCNYXYHJhWaZtuew/btlsLZ7lbA0hiQLRaG+YRr+QMBrCuooClJUaUoxjGi3LRh+nKfJpyQYJwS/lZnMcl++6hCFSDFMfxqr17yigC/kWw739E4rChbyz6Y62RxIoG4s3yRYwwC8FE7i5ANcEY7rdStbXKObgSVy2593WHN09GEEpPeaEr/i3z9/h1ebHt98z9GPy5a/lp5tLcZWfMF6JFxYPSzcSAXYVBjbbGbM3WHKjH+MVu+j4ZRFRNF1CnxtfmP+ZbUGTmO2ZbHMexfE1jjHJrwMCiLy5zy9DCX5GhSOu76HZnB1RX0k+HyIUrQsm8Yp3CCeIkg1rIo46QwG34Jstdl/2znOlbVFwnDsSRigIE+4uvceUbQhYm0B8BMYhwSuOXsbAjNLwHs35YxmNINn1ATiTlzUOQjWsQAVMFSsANFiZZmtYWQpW0z/vFLgYWmmAAgHRKzSEcRSmbDvOQa2ycu7wP1pW5j/8DJzSgn37I+xT8VD8RjGcofgBZxGNML+hz1yG2Elj7qqIhZT7SoMkCgJ+9q7BlXhUivk7hDHMMvGO2TOi/kLuyEhiKJRxtJQ5zMvX+5YgiAzdZ+QHUPhxjZMEEeZ7znY35i6eEbYV8q2r5TJmwOZ4V0mUu+t4+ijelM6aYUpxou28wjIWLSY2s0UJSxlX1Yvm1kctDfJDbC9KQoaZz4IxJJRt3SFInkzLW7O/g2UaVhhpnJ4G4iiwy2FDZYWnIUV78dU+xIkJ6w7GdJP6vf9P5X/TGb4vAjivJ9iddCGYx9SG0sUwHAPNdUnCMMzJ+FpNEvP8p5VgLI9a5eztql6wPNULoDUvuIoXJuNaPwSQwoxigl73RWsIWm4ZQUcjFXVKsTUEh8cKxe40oa3Rz13nfE+B5eGPSa8JW9GEwxoi1WvC8/Nj1GvCU2uC42lQowlVVnQqCSQre03Ykf8VTXhuAhxRdHn3mrC5F6qaUPVCp5pQgn42TdgYwaomtFVV3akmtNSCR+eacFdDFZiYI03OH3oDR8XFNtoCBjQY9KE1z/sBXs1EFATjRmqxpK1qFVhFUwYwW+Q3MHcxQk4ggOMZvPP/0Qy+rKTiS41C28Xdkq+s1nw1UlyDghDJrIQJXeAQpzC+2VsLHuIA7tvcY54Bc+M/iNKNmDKCK4rLXmPAks034Zx85zvfGQBX7k/WxaOTTXHvAZGIvT1PxKqrGzovwyvio0MNhcsoJCE6dEUgBtkcvoNkIChmCvdneRJL59j81CtC4KbQYImjlGaFKz9wQ31EcJwiR15tbo5AhVPbB9gzbPcmb4icallwcvOoELEfNp5g2NggShWjkNXpONFSM0Y/TnzbOKF5dqokJ82MY7fDBLW21KlAbQ6gKYVP/ZxcpwIVqGXfx4cvCoR4ReMoZTFRrtIwihLJOBDZdl1EH9mULlbWbbnfknXIF6cM4EsGBj+X6ZMQwPkNqlH81vEcwDsbOyeI0D5epzgt6xIlftZH2e7k324SX/awoasQxNFrda8lisjs/5Hln+ov81htN/pY2s6urgrxSmKtafuWxJ1a8+3Xibxd3DWfQqsoe6AbbHYr82Qxq5d5/0XmNWfAsFL5P3flF6iV37YkXfOVM8OyeAPnLi8CtUDbz62+gzhqa9YidB1H+6V254yjtmbuptM4aqv19P+RagcfrCJre+UA4hiHS7LV9p2odqApyb7X1GwPz52a1dVgX1lkNR4ISqIMKbAV4KjLfzUVkzw6j6H/HObdVzdjXR+IS5UNS5fXDcMzbpU0Wqr9VAs1Ec6Gg8jHaTZYMDiVqkttpq6VBCdgSHWga2pytGR4aXbdkdWW03NELV9e/VoRnqCnq1nmk2ipzceHCaJhkuJ3hURlt4tambJkYnxza+kKc/yht/8+ZcUHP7rcVp5t7YoSrlulhBo0HE3QAG0RwrbfnLRL4J0wgzuvJPA35Gr3yFwtV5KeLle/zVlO76wDzrLel7Pc3lkH
6rHvyldqUaOf+T9DUaP6LYFutrfjteJ9TaPLmoZnlGsauo8FOq1pfKwPxZrjPap0OF0Vscv1v/2XmW11rZFVXm9/9q7V3dd/zb+UNEClX2i+WD1RbYft7n+jx7Zutv+9KODmXw== diff --git a/docs/user/data_migration_architecture.png b/docs/user/data_migration_architecture.png new file mode 100644 index 0000000..8863407 Binary files /dev/null and b/docs/user/data_migration_architecture.png differ diff --git a/docs/user/data_replication_architecture.drawio b/docs/user/data_replication_architecture.drawio new file mode 100644 index 0000000..6192017 --- /dev/null +++ b/docs/user/data_replication_architecture.drawio @@ -0,0 +1 @@ +7LxXt6RIsiX8a/qxe6HFIyKAAAIRyOBlFlprza8f/GSWyKpqMXOre+5313dWZp7AAQc3N9u2t7lH/gXl2kOcwqF49Una/AWBkuMvKP8X5P7B0fsXaDm/tcA0jX1ryacy+d72S4NVXun3Ruh761om6fzDhUvfN0s5/NgY912XxssPbeE09fuPl2V98+NThzBPf9dgxWHz+1avTJbiWyuFkL+0S2mZFz89GSbob2fa8KeLv49kLsKk33/VhD7+gnJT3y/fPrUHlzbAej/Z5dt9wt85+/OLTWm3/Cs3vGWCyD7DX/XiOI04pCj5Hf0VRb51s4XN+n3E3992OX8ywd3Nbe37gL2HMIDGuOnXu1d2L8oltYYwBo377QF3W7G0zX0E3x+/d51OS3r83ZeGfzbF7URp36bLdN6XfL/hryjx3YO+O9Bff3KM/ZfZwH8ycfGrmcCp743hdw/If+78FyPdH77b6f/EZug/t9lPlkrCJZyXfkr/ubWysmm4vumnrx7QjIrTOAZGX6a+Tn91JqJwDIf+HPuiJPGDeREI/Z15YegPzAvD+L/LvNgfmJdo7uey0f0hX75G/q0h6+9h/trwxLj2P5346/yFJsx9AUwNxy8nf+rlcSzp1IXgjfifu7zf+VuvPz7pbv7V038z27f9lx8nM2zKvAORcs9Des8bC2apvGGF+X6iLZME3M5O6f2aYfTVFZjUoS+75cumOPsXnAd9rUv/bShfXf/oD13fpb9xnu9Nf4Z3ED96B/oTXPzaO/4o+JB/W+zh/ynn4Pq2Tac7VhGI+4K7P9U//tVJ/Nf96I/QZerXLkmT7571Z8AxjEM/eARO/B6OYfIPPAL9t3kE8c/R+Bc7wP+ZrAUTP5rpZ7P92kz0H8Eq/m+DVfK/o51Q6G/4jwkI+wNLoX9gKezf5lDU7wz1KywAiQKyzi7+HxzkNPLbScGpv5H47+aFRv8A+f/2b3Ng+h/My/cUMP0EwUYTLlk/tf+TZ+nHOYKp3wcO8Z8E4p8C8tfKIbm11PfDflqKPu9vuvX4pfU3ZvnlGrXvh+/zU6XLcn4XhoAK/Th76VEuPrj99rtvR59fneGP7z1/HZw/WB+83D+2/T2Wfp3i9F8AiyWc8nT5pxf+fjan9PbUcvvxTf78uYH/OfinXcIAwQxcvAnnuYx/tPSPU3Vbajr9Xx98fpkFcPiL6b+Ozl9NhP+rGfpvOlvof3G2vt9qADL/S8zSP7KCW7j+2MO3AXy/6ddS/jf9/Cx4f+oIoX7s6NsAf9fRl+/8PJz/gjv9vmog8yz3h/GvhlHa/IHO+b8SS3/5da3ne2d/+Vlt/BM8/QeR8f8sLv+FUkK/Lk3Z3QnqpwIXsF4SzsXPsfiTub5MbdxacSl7YLaoX5a+/QN7LgBcf2/2XyfF7/WL9shBYe9v4T6jf9uG7n/l4ZLu4fn7eoWAU/gt3VH2viEp7y7/fDmK/Nbt/yjl0b9PeSTy75q+/1ipwjW0/1+AfrkA9htuiv6BAv3P8p4/Kkn818uBfwZFhEn6R1uRv7cVif5k0F9bi/p3sXjs93L977N4/n+wJ//155WC75OD/T935H+hQvAn1LWzDPnjunZCRARO/EnWpfCfFet/I+f/R6WF3zr/Ox2a2z2/1xz+x4YB+c+j4D9aT8R+X2b4fRTU6RIXvyeyP7CyL/PNP5rvN7GAUCT+QP4oFrKvn7/8djmAhf4GQbd3ctDfwLoYB4EVAoT7OgF/NSO/aaX/sPWri99eSf+djsmvu+/zf9AJ/Js2IER+uPZrBeO3bLJeo3Tq0ts8fytjQFXZYXp+feDjqe+qPvpTABb5V4qL/9Ha4k9l4D9Rh/9c/yCQX2vqew4o6h/qanBgpFN5jwzgwJ+utb9nk3+qtb+lnT9da8PQ3zD6lx+S/HEdGSHhvyG/wfl/VX5j1G90CP0bf/k3y2/899Wc/6/Kbxz6L07+fy0cf1/H+LuMp2y/dor8c7HdgBNsGNf5V6T+BOpJmoXrV4r/O2p8Hr7J+6w8QHSzXw9kfmqFfmr5Tr5usfjtEBGGLr/RtnRZ/b1Dipj3zP2jWU7xcPL7kwsOOYtjPuCDveGqBT4wvma9oSczzVhMmPexVDTeo2ltB2IUcB+TMyzzCBnGAQcs+MfIH9Gvj9/5437Y6/vx8/6TDAxj/nze/urn6/lfjV+nZOb7eT2/j/nv7fcxl9+PfPx8nXsf7786lp9fH7u/IOyEL1+fm90Smuv+4Mn3FULpsJb7rKlvHcrvh+CkjLRsfrvLSUKjaLZu3n17kpmOmB81czDMR2Xe7CM3ZWc0H/Xw7MswcIfybVuIGirlffU46Kcy4pfgWnXnOo2LDc/qbv9QfW+N3hWX8qevrSDRPEV7lOKnjqx7jByjPrQmF1sWuT5ZiKLSdgHpfhLZ/QvVyWRBU2p7pZh25cambxtCXERynwvuztv0Icvyo84ti5Wf/9e/y0LuhejccI+23sGGh2sGzId9uMj06/tjQ5yb5uWvIOo/96N96ELA4zuVtlKOYFMUZodsmmxw7v6sGE5vndx79x0Wnkytm6L4PsWHn68naublLGaZL0nV3qb4eoViyJh2EXR4H19OtUrbfX1jnffJj2XtbsnB47Li7SxCdQtHZ6RIq9/Y7X2egB1O2VuqsQ7SSZJK+8i1nqxQJzqLNTODg8tM3bPyh++KBT8aNe3OiYsYVb3PU2jm4gAVKGa9/50VtaEJiHpbH9lnXnKm41cHL4PDAXARFL/1Sdlx95fTLIr4Tl3tJVxnFrwUi1X65sVYvTo7Ioc1EeLcNxQ5HAtPTlT985k81yEjTErZO1WjcexDWAjPiUbDvWVMHp4fPdM7axZ7x2USW1g850zleCO8XM5hdZSsilJe7ItpUOAgA3M/8rFz/c5irMLp3Oy+70SDCDiRQ8Jg+T7UdMwo+YhtU68XG6vnZnTgznRZSS06g7iODVj56I
ZYfVYOuthcMpW+pz90uA2CT5YT/ynexLpGSxFjBSY1cW8c5+BMWbIShLLar/IjTyrzGMawbQoH2TEOkWBT2iXLWXrCShbB2oaez+iSSMdPJAQjb8odc2c7tiLl1n091tLcOcak97sjPF8fUOFLypP+JE7RhS7+Htvq1UKurszQ5ZKMxNhKG7TB2Y0+vCKODgFQu32PwfV37UMVzLSz6yhLfqN/naWrJvP0vJaVSr0Il53THXfTVtzD2E24mWmBM1s6GcxECmy3SoL4UIA31PEgE672mPLwUeSPrZWiWK/MPiCYcRwrbiyaVsGamsMCSYIehncFflp7DIWdmrIycL46rteQtLcgyRROW4Raw8bUeeT0+aeVywA3VSPtuEGpR1OedA8KhnpmiIhfJDtI1mCmwavNco5IClkPdtEm+FjgU2cguWf6qtIwOrOqMSocvoguygtyas49pfttM+k2CaV25ZQ3rSh74u2Zq1a1DMaFH3rIEgrBh6RpebPyY9HNZz2Pyw1gFSVewlVUihwWYpy4efT0JXnTgtvr34M8v9Pn4pHhbKm164ZevpD54xow+o4c9txNxdLeo36i3PQ8WmZhyJdvSyt7U8mF3IV321SBUcrn66xSaIebibIYqXm37j3Q48yjV/YStY+bC+dDu9qbc7EpiVSIUwh6eQMErPTiMUfi3T4rLZcw1+fVWFOuDtxTR6OoQXQzE3iflJpiVnJqmsoCEzpDcFHrFYVDnwle+9k+rzCfnnLpukXQNg6jjol8v8CNI4tVsuKTo5GGEuAqn9l2if38cAqknVt7eGEX71u6NgDo5UOCY598P21whugUR+RJbOl40Thuk1D8i8amSBvXlTBJJtEL5PPSmaTlX9HHfmcVLfj3KPxW/cyiwLs4TQwOWlimMnnm01eb/cOV+fHQGwN53xfysAa7puR9+FyzirOCaVMeqrOjex7TyjFTtjbEGZ7rO4XEzIyfB9QhbO2BnTQ/s9T+aVSR5Pr3kcnOg6VcH3ear6f5ptw85SjyMMLY35u0JnNhVd2ajCxxJ6SQXZty0YSJSR7rLDuCNt6vIzh8nVCxwUGQNHhjRrKYZGvOKXXaBh3eIygod4Wmdzh9SOTRf4aBWjiIpmpAdZmcQOP57mSk/VGCi7CL0dh1eUgvOUwYFEKrRk5OJgv9qlTj4G0DoX6OuS2Y29ucaqFJXubGDtihi/X7fj7PP1E7/3QL5hF+nnbFyzLpSlW6gYhJpF4+HXunKsG4NAXpyGZQ8zoPPMtkWwD76ThY3YZTq8PpOS6eGwdJa4gwtE8jDP4mhbyagilsSP8Nm/ZbbSrogQQCqvCUlAkuZVylQFKYCwsrV/cgm37MQ8ZwWZCSqvlIgXk4PPRq1YF8PMpp+CIWe2S/HhyUcoqubPwd1oVk8ZjdcaLXDI+a+/IERgnv90u8xrA/j1WdmACAtybod2ZkIVKOuoBuUyOo4D7JZmomXtDNCtiLi/BHHM814RBZhfGBVRvaF6N4YjXa6bpGCEVwUr63h7qlGJvQBDKSDjEHPVxX7OKT7I5uxp/b12A4WGaePGHTNxtgwpZv3sy5PT1tvVGfOzfGAVieC29TkyvxIXnmUbCCOmu3y9i4+hk6axd6kMsDWFOiMnwzlmGssko5DRfuC3eDyIxJLWF/mCRgCPu1huPn5bT88GyiyHxP7jvQH8ebEbvo7KveGjSYc/09gHS1a4khKVXWK0Gm7ezHEcA7mRgFEcCZlScW+kUthEndxSW4Pg99eVFMmPKCmBM2i/Sz+wlDwmIEpUKg+uMGyHGUomrHAHueMeXgJny9bjQV9lPyLyXvRyAkbq9RlLcqVLPJaSfIZUNVbFuXyXo6JmRQY2/LlOWbzohM/6x53t3DEoGqYs3d+eH0xpMfxDK1dThhMKSjncEOFeOxntFobg/YyS79Ad5ApajIngaLRhlukaUSf9F4kGHK7OqJtTrAT2T/vath/EmDqXdupNiRJj0lhJAGCK1QF2mrHeIdxAUU9+7yiM/Eww0Fs2pMe6SJAAYTlIw2btSo5mUdoJO0Esyzf53Bo76fsAelk1u8TGtY6X3u5OWg4ysnX8o9giBUtBeGrkuJq6KWiJ9nJEVLwn5awaHK9eUTMPYypc11zcbL9gEb4IscZBZ7RYXfWzX15WGGqzJiRUz72HKtTkTD8kJ44YmmaDg+ANEpMLe/WKkDs2wA+Hdfxe4MKZ52xOE7jmYNTHGO2D0WFqHja18+d5bqIV+uz9tCKt/eNJtiLMBAWaR25DZGDjNNQ0V+kQdS36NgHjhbroDhHYuLmZXpOnjfXHJN8OuR+ufaiNSHUiABv2m96hIpbTCXKo1duO39R47ILOqzakjcCGRCGJ+aEMK2D4PbvIDfc2Dczda1vYSbfNrHhrzJ1k3WWzROBlJfUQ3VlifM1sO4xHDyQNxb8i5UzA29zFVBL578PHtMei8LSz9q89jpsKFUpf2oFJbgMBerXeLH41z6wHVzx2QIV/RohaXurgT3ImnxJRvu5qvEsw303pegvocEzXbohdeiJOPj2XYva+ynnFLXRYpMEVjjThiCrHM3aV8X02T6L+9nmTFaGyeUB6GTMMi5IaLtw1Fcq8Dny+oUQrQpFZ7e+EeScL6JHdG1v247Zlhl4elZMhWjmmXVz6P6LAcSA/PiFAHZWhuzrSJDZUTL4w+dFUaEQyDtuLF0hO98wEq5Z8PHC4YJAS4tbt2U3jDoYP3Y6Iv0Zj4gug5HXBhmr3OUz3wKOGDM4BnymTE6kdU/obbcPnaGDZH/lhnAeU7YfurN/YkU2dC8LFSB8/a+63Prs85S6LOlTFy/G3S8/VB1oUy5d+Cb4nR8c730xKCwioepVaTG+U4CQOeYzGDvsENJhPs6EZTVnneHbMzGNtBVcECBseDDZhRreMiQelGdwOKE/HkRdHnU8hKJWh9puUsUy8Z/oCXB7JAWCYH77FwkhG93T8FMay5P3L+UN8QDiB/78eO6jIce4ZO43azYRG05c+2lz5Asg7SbnjfRP6h4FyvFORC7pVTsMAazo/c0bLRokgnkHIFy6W3I94Y05cGrcglte10o0pSPFFonVlTpMUnd0yAaw4t9ct1H6PzphPYlN/FIaqYojKLABhmpiNK6NMNmCTG5746hio6+g8QXxq6lmik2I8wmnw19mdU3AvtSvAiiyws79sRBAuTeen58wsxenbnvqTgaCvQ1K46QGl0W6NkxWHBFsMO+thFk3IwEcpFK6/btlA9KnTELDIAeav8I0Ctfb+JvQ7zPRNGcsA/mQCYOkiMTRdX7YXcQHtfGIQy68ZsPa/viow1iZ3eH/o5d4BKYbXkgEAQXSdrwGfQiTF1QxiMqCBiNtkXtrKl6BPO9+H2lASgf1fhb4vCJR8sYnwsXD7UNkOiW0CN0pdKxRPQnO1sa84mh2h5ufQuCWlvIwXwkjxCunklIYfuIKM3Le29kE6Blnb+9cjcf1gJUjBSQ5YeEBPmGOL4+NMQUCy3ZEvfNgpi9rzAzoHCNssBd6b2m4rlG1RMSs500TlaDkasANRM24
fPX0+UznnLUwQdu/+ZVxNgMU80Jx+/9OvYhcUbovd/OtG2z9+fJVkUwdcHNdLkZJPsW76XtVoUCcBCTHnvm1Rht6e7xUkI7ceZKDZ9+MXWNnfAEJU+fjHwKlPGIbCobVpJOoceIO/LybLYJvMJSkEuONjruNu9U7ivHb+c4432dNAqzjborMwecEw+vxk9CggfPHrtFxDQD2l7aU336PULLbEs/TxvLvBYk7RFVqWcMgcrYYwzWb9Dm7Ms28/mGOLm6V4XS0nccrFekA77uOcTMpMpCGB7+qK1V+FgPjr+pz9JX11O/qKdG6+YxmDWZFJp/UyAzJKSwipIAaEq4UmZQM6F2tkcbKnq9jjH38jmH8TCUj1ZJMZZDUR68BxoGsKN3+dTET1s0OwY3jMgvZDIktXqnKLnb7x5P1tiPhQnnho9eui60lStiw7A/B3u1xUGowFNNEwo/bTFGnDjMcEQOTru65DG86heybnEcSxdVPdjoMbL5CDycWPQQnigPz0H8uTYH6j96xvSz+nLGT4wN08oRXLJd/aLtrk1jfW7gIevfowLcNBoCShWO+176lq4ghKP+1l/bwpGfwLCqjXbEwiD7IhRdL+luvjvC860ZH157U2LAjwVvVRB1LWEMGcZeE3jPSBrn1czmkrnmEL3QVmMZzXQ2TRCFhLyQEuik98Euyvwpx1U3DtnFA/2NoeUEz4zhKiMRbYfRldF0IaINs4a0FDk5ehgP5B8x1DuoqXEqVXeLcJSsjPKO0i8tCcoLaObKpNUr1VPJ5YJ/woKNRW0VXhGWHh6Kc+3tFJxa9K3QK+ZzcZatDWoE6PAK8K6P3Lau2Lvusp5elH160mqXGRm15YEmReUHMi8q3k2jbBxHfK+EvtIEyE5uaTfvKHKLhXdg7ZYpsApPdjwuYuuGMO6D6oeI0hpB5pKgAPphjhD58Q8shueSKlvz5tkHwpkFYuvS26zaVWM54PQWN2JYG/pq9HZX+FOTb2XGO4WBbhxjcRNqmp2mzjA7pFue4Y7d1USSjQWqeXgSuhJ4Q2ozaXsWbHPntPya4rXgwufwhaPGNnnlh5K9YPWeudQ3L3Kn3uvkaitipYy9NETQGMNOeVW+PNXeRxH8zSAokUMG8dGoEi6Fhwkg1X2bD/QyDDYNB5Ton2L77Jt+8boXFN38PdgMSvmy9P1HzThEnRJkOTNhLT/juK1ca+xDT8iVe89ikIXj9WEhbuledTEWNJSP2fBJ81JL3J5IBbLJoIRGFBqZ9LIZxeLgml7mbHYcvz3BbMJioQMr2A9Nc7ZBqlzq8M07P2t39jIRr0WptkAp7hAvz1z9zgJDaHus49gxqkj59l+DphRQVA1KdoiIZNsL1TUj5w2XPtUA80l8pfV6Xaf+xOTbGGvvuve1J44pU4e2nZehk1VWT1zDBd69tHmX7yzJKcPnTgTsI78k4Mp0bbPwRaTwK9Lcl++pEm0QAYIi/YqMC2HhxVjfJEPAO1q5OJxODOFaFAT0cP8NdSPcNPHAPmFNDVE9Na7CwaYIWVdBYpY4LWBcqkF655kssRSCKg7GF3racCKlAQYBEsGwjLBJjTvDg3EZ85ll/PSSjMp/uvud6RSaNfpDaIaumzJtFno6rTmnJCPEC41bIiTT4jxzDHGRwL3Y/HG0kVq95QIj6K7gDhc/02tSsZvBvJGw7k5alVwyddZz7qLLfAHyN+njEvGP8EXYNdnb4fnGipcOpvMDlUUklyjtZjYHu8utpFzXohq1gqeRv6Ixr3LMvcNKO6OnaN+YQGFx20bVMDLUdHMeE7aHMXkyxPTBerPtXOAh7uGaabZyooSb9oQ5nJOpgozOEGBq6MfmnoC4dexUaje+tXBzoF1imAdjpKi5jBuAYFuFijPGC/ts9+xFDO17qlHALEle7tcXntWqBefmy6eIgsgsBRrKgP+gAxnqJ2bQBqR/q1OZ6RN5AwKUZpHwMjJxwT3DgyzPxO4ECEJ44Tqtd7+RE6emwZ5CASeY9GmsigzRpNUo4yKdua36MA33RNA+HXioIKC1ijRt8hbI44M/7I23AsUG8mDS+7ExlkhatDVzvRNfgSuYsxRgxFZQugRAi6BFTNmqA6EivuCgBOIZhRCWZ0KKKneLichgskx9uVlXWAiajXCLjvuyoBqjLjf7wBxGfDfSR64BRQknTJxxHrCN+e2kBiHsU8jHq4J50kJq5WTcfBWAHtGHSFA9xU7IHExJjCVWrwCeCkRxvVZOyuXx6HCaQOZ3tKIuZoDqRud4PRKUmNgLFjm02aSZqFGgwl5Fcv3CfCxZtRK3N3h/uKdLwHGjcXBipcTWBelcXRn/9lomogHpH9YZn9ZNNpIlg/NnBKHxFXcakIduOLVmVhDmooUB3YPZjraLm5dvzCSPkfaMKEIoC0dwslzrp+ftW/dJlXsgbZGpXF+LFt5A26eyKN/NSl8I+Xs+2VrZCHtO6LUmyfHZ+CaRYuKLqCNz5RTjHYXODEryQtwDj+hf+CXt5EBnF+okJO3rdBkNoftkaFeLHLa22rNR6WtPW1wmrn1LeSxEiiRzKpFYj2xFWebmse+boLCZMysgJmLNEQh6AG/LeS8SJV7X0M3uQaFOZLId9O6st96E4bw9hm4D/Fora7jCYWtk9r3ndGcS61r3120LH5C0edlXDXHQ3K4bajVDW4fW331h0HDG4xQlBsskNklrjKp2a9Em2/geKSxMIsc3t1EqBDhlkcTbMusbas8euUzhQ2Ouh4kngAqOrf8J9k+IASTdOAHE0K1YDCVD+fq95C+vTV8WFFJf/FjTy5DJVr+eNynr9AK2T9QiR9rPigivtZvcSXQIBFpA0ySkYhSlW9ubSGd4tN+n3gHohMMXeWg1NmSITb2zDQjG2WbIr9sKNbjeZ7QZ4HGskupBrCYnHdgXtQA7a8Wx+SHZAGiJXqmavCHXRV4iWEFESb0BnXFZthX7qulbIZHI9ciCDUSxSDw9bM4TVc/N4TzeVkrnmpkvDBTTZREEY0skfccIFhAZzppYs97W2xJQl/X8zEmoiK7tj4SAVHa6E2SrSXjAOL30WXV1zSQITTX8UzLMPTPBkj0WfV6Hm9gIYpe8ijjExmM022lK9JvIL123lxrwdkyu3lsqUMGadtQHOatnlW46HiMlJyopcNa2hSvPMEJMRch68ukXDoUdmZVwnN5pvgCggAvXiePtKzLvHrc5RiSQ3UStCLvRp27+REAKZw+xMWGuiWmZM4ZxdUMpi++qmZjLk6LtUJn4VdkbC56ftgzMy8dI/AALnSLeZyblqdiIVjTknFrPSx9vSzgm8JOmVTKWl3aa0EyA2xYL5gZ+2utAsYDYbd375dsBvRXJVwXkQeLQyoIZSaYXlFpz10Jjub3QgYsdEwe47ETeGIT0VbeMgHwYX20dHp4RQPpgvQFje+12Q2l+hbSVU0zhwHDHnZkk017jtnjkiQh8O1EeHvDqxg4F4sLeBknkypS1L8YtaTqUYKradDOSjlHxYlh8WCs+NHPqcdP60daTUAOstV3h5ohidbJBjr+ufgiN+Rk3
LXYFQcyu3aeX0GzVDtaWhqYoqPFY9xs1ucYAY4VixTiOhJbywH/O9DwEMlLwIEn4ht8i9UyiZdnxuUbWeKXXMT5eup09xXrGSvydOrnlLnXzaMmbAdNrZxpGMrcSunCIMBMkXGRywqSzxusVWNTz0PphaPIg8AF0hx5IA2Gk1wVpIS+QiNDmg2mXgYtV/FXriHTJ3C8Bpt3VVVFRbSH3McvqzX22bOXTQXtG+zycCh5Dbs6t2vKcUPn9phvGmeAljDhqsd8oLs9KTRPtELKl6LD09IBXh4yRsdrNNoE/cSQ41zdOSVWFQw1jV3oB9BTphGYgHu4Ywtjchzrt/X0JjbREaPhda79uNtpyD/rO+NVz+ZyB/XnANtbgLVGHoM74BOPjBX8scbPVKZhRx6jhvTWFxiI7IkWK8xwUdcr2wN00kyVDOYXhS+C2zjg+Fj5DzJX2hSUAqZ1SGyDJnmFbDVnr5aq1ap+34NEDcps84VEK2zOJTatjFjSDa1dpbh6eSRUejFsxncsHVgK4pZ1CFKeJUNENLRFlENBII9Apl0NogXyRLvmg4hLYRjSl27uaZr9xEt66c22L9FuLJRuMtNQtNw9pJYN+aEWiI4sr6PYiSDFthfd0vAVmlzAgz/gLBoJ9JZOOB8W9uInepC0haxT1ZXuOwgoh0rBc55VXzDJvWJ5iThF8LtWF4s19B4sLlo0coCyuV6wjeDN1wL7Dx60Xzy6cLUbloGCKtmeGwEwV75GOPuSkrvluORXQL+bQGFBf0XfBi0DIScJ8IHiu8LSpcajzdIbqiw52lDmewRB2VFRzxzX0kIlkx5b1gv4coS14QseD4pSvLQoY/gIM8eRhIO8rwt7GBIjUM1ye5YHQ2LST3JfBd7CErHYzHS+kF72jWGUwCIdymkIAPt6q9QoN7pMbhSHQHW07YGlKIFuq3F+SFr7xOG0++Yca2hoxchlphY2Uho8twB0dI9+KSm0WPgHp66nOuJ0RbNITPPMDRK8uvyQJbAlhY3SC0xiJrE6NOTS4/AZGJHy9hZf+Up228RJ9sLO00SOveh4pB97ObGl1DiTXPr5AMSq97KbfcL6HFSKiT/XW95ibpeorqRp8bJGQyOj1YSa+Cn1umkLTSGpIcSZXDqYWy/NM7BBEN9PCA5+5ejjRJwnTGSe5y4R9bg7IldYDbZG9dTRf+yA8jk9F9bpQoIbgWagVw1vgSTLWmyf0JSXqp38FmZ1lY5/4V5Xxj8jz9whGM8eYmpkpQfY9XolJZ2O7EWDlyOWCiL5z/M1mxnzrVrF3oTWf+tR26anQzzTEPthCR6xP3W9okioyOg9HG6csseFXIjzJCHpMnWYCByf9CzBKcSemsz+SfIAysq1MOCronRj9UJm7gLLTr+0Rvq/AuL2WWkRcMzxcDsqh9pRwfDKZnRoNzfjBOQghSAGe0tkgIuA5cwFoQ9ZWolTekQpWXoQN9Q1XqmphCcszqfUEIaL65qXYp4dgOTPG60i+1mku5P1UKfvc9KfiY2ZtPGogTtOZVek5ge4Bu2AZE4/Dp4MbPoLcfHkeE4IKrXOOENvPsUizWkxaGJr4RJ9g6CwtutMTfyxYP5baaYQwh8BRQ/itfoWTxEFzjFMc+lxCtQRRiwTGhUXKisRozOGz8In9CVbV8XCQe2Z1bjEfzvqor0QsCd7ctW4axaC5qkTYDrZ5CyEw3QTxvXrLc5LaojG5Tu0ifbMPsyMGc0BNoW29GyUOuAp9PhndNUsZ23xjrK9jxBc+DJnx4C34tOCCGAmgiP2l9NY19fdr4aD0wjrHThjOcRh93Nd4tRKDcgYQARTaLTBAnFFbQKi926UVUXf3nbTUdaVucaWbYS8HNXZ+QmiiADzn2DTihsutQSRHI56vnXNUk5s8P5AAabmKDdS7ZOcCefBZMissBLkTiw9l0sVxgxoM7zHjBM7LfsZSH3C1K+qvwQKkkvcPIhXXqKyrWCzJMEtt6kxex0QKhWRfVUm9w0enWF66Ec7o5ebSAf4PIBZ7tJ9HfouzC/MMtHw+OXYMtLFjBxucNpG1AQ/Rqhet5uIqAuCgPgLosdvXd9LtUEcj8B4CviOJoxtho7vdDIsmD8T+GK17+0k541Tt+8NsgN1z50xUM6lQsG8n+sfVBMWn0xZ2K6yJ5ugD0r1YuKs0mLYfrynAP3ELZaPLtovyNfmlbSSYidd2jZvv3fcFtnAeE1AFYEa0b0mKBXXztDnKbjHQyWhYjLkypmPv9ArI+2W9gUYteRua7QaSO/Kgn9vOT7FO0+IHkFkjy9ZC0QkcEXmk5wmf/vYA1PosbKIjvUxxoC5AorK/40kSYgGfI5oMh2ZRp7uVVPR+Ch0+ZJCWY8zjEOo7svooyxAM5Aytnb6wgsEUU0SMr7yxbNHiR2R8E/QCkLJbj+YR63EQ2PilZmGtbbJOkEpp2lh+ZbS0cAQ5Sw8xF7bQVEbBJ+HNateUjNiMJ5poUPzT4bWlkyb1Pck5wJBlKm4YpQANM4pvbGeshtfTUdylgUcyQ19PiVwqZTtf06Yh/pYBbhpvPGkb2XY80/qMXsAHODCha0XEz4QByB260XTGX/2+vhnrBSBWKlYbpSQbIwQ8YVB/qhf7pR5EzPs09eHBN2uEPmAmsGqGEEVEQQp2pDB/PBVO0GpbIdOrYAmmSZXnIBR9B7jL59V5N/VQI5HImEe7PKHs06cTJoZCPeIP11x817nOxcACf5/KYC2wUR2ksRhPQeXPO/1IQNPur+JOSjyKpX7ZdHHmDWQojBGQstMtc8xbUkR7ClfoTmocRQ3nSVy+l84zJJYV6gnmG06CzD+mN5sYN/2VOvDfDbFWCm1bZxzB18LUuc55Ebft8zRVVR5US5fSO6jCOhFNHc6qcQ0+Bv4EDmbRXWkxKyq5CjTwvg22cRkZ1G2nJCTBEHENw1Dcq6roIKUqxYNKqIzFSseVIkwTcwhhndyWZrMKeKF73nu1UQJX6/VMW0Rj8+jjb1ILKx9qhL42oCSD3uaLqgHgOfIUHqMqj/SFUk2otFHLEc/HWFRJrERB52opPH2bWBbShr6Z8rX9ml8FDowxTKhD90FFTTQ0t9huXDAMNO209YJmkQECE7AG4xrH1V3y6mYXNNQOPepqoVEVyABCkrWe2jpCPNhNy5IGMh+HFAnvr7W6sCoD6CKDg4n0IYlVnNKr95qx9gIhopQ889YaU4hAd8NEQ5N+hiRuiwuoVzcGipyDB0gabXPA0fL5/cyZKwmWkSF2lI44rJ8hTaILaZEk2AMy4MPBNBY4+E1GEjQS1wea9NwNw6YxwgkFfJ8skmZP2y8P1g20erpa4iCNjx9vRL3kb6bSiFg7ifqD4N5WElL1zh4kaFeeoKrfeR/6jjZ3hLLxfdzigpjoCSrVDH6ZSHmM6KgPQAf0N4FdWhjbpfHoseFD15lQNa1sOkeva8LjVMmPNKOp8e7JaHrH1Q6UggvyZYh0wayqOGEKY79LSzvIJtt3yFqgSYSkWabdDkkPyrZr0W2QVmOiyJrGfsTskMAAhhjWocOytbq837OD+7H
Jx4sadCfFKIkY9jvHgC8PCZq3hvpjvjs7AiBnXgQAW98h3A8WQzve+MELvhxSrQHv250Ibcnqk0bYRZUz8dpxYzn88FEXy3FAi8tx7mKyryPTiGB+KFiba0l1asV55+U0Vel3tE6aHQ7QUr9NVsGyh1g3WUvX7J7WylWZIYdJ5gclWiNvT8qIbBQgzZHTU9nCzlcpPM4zQoXRDlSlnUf4QW68J6igbBFec3Lia7E0tD2JpFeQiubWhyhuUexCXaRbe9zzjdQ8sSAhDpCuc5tQK5eFamBc3q4UqrYT3x7nBvCEy6XsCmCSuKkKwVtjvOcL8oGptRV7ukZEax8jI7rmRBqmVDijjL6lyIfCjVHAl0DUPa3KeqZh6ye7+08SqlT8Je42Pj2Ry96jQzGNrqiA1DzTYakZ7JKWoqN7+1xeHC6nr5PUPiKMgjnZoFYJQ6DGHvwKu218XtSJpLC9xeOBslz6+ETPjhZyAmWTcW1zI1G9q1/YTLrFHiEVaFo4iQ/ADU0wbbsQCDrfSxx2uReI82KZb09WsMi7QcddQb7ONqkwHvho0lfv+HYPD5nLmY5WbzeOiwzdP1s6EkFSQiOZNOfH67lZ4f1YBR1n4FzjM3xI8IfuTeDY49Q3+0D3xWHfCQM26Qcqd/JgaBpG3pAOrthNpTU603Zk6rWSZpfARQ5QBLCdd3fRNh+j474pewPYMXwDos5BfEMz+lkAYj1Z5ZwsZ0VU+IdF0YsBWbB5h8Q7WhwayyMXMSTDa2+9fbZDvUqQ1oB5pqN8LCyi6bdLnJ5A53rCEJnrTpQS1iVB4It11rjTrlJTlT3cPbgFEcAKIrMOMFIpkv2GJcx2HTrah/XxveRfdfQgDj3fJ8ebHNSIj77mWamRkbmR21vSrzVCBa6958sy8LpW0cEGVYLF93HqWXQW/xlbzenI4Xp1FxXZ+36Mnnb1oJxKZe85FVC4F8SKfZnL4shANT4ePOt/PruFegZ/g9hI849rRln7apITcSwpjfC1YVQQS9UJ3c8nzuRYpbfeVkccPLBs7FbB1Ho31sDIghcGV3R1ReaYefEnwbi262UDvGZMHeibybqr7em2SCM4MrqFE8eXPrzXXGMY5bnDgHRstv1JyfikxoVcRm0QrXPxzHVDo2fpzc9A3Oc7jNfBqzH8wwga3bYFThJgPOxkW5L72f2mCBrDudg1FPEPGU4e7fQDO7BnS6d1/IkG1Z4+GZGkgOAhz0XbDcKCHTVj3GLdVjx48KojT66pej2VwtLYwOvZDM9vgI87cZIuctjcGm8hH8TDQC+8riztxn8WM3iu9QLsw2XhQ0f1p5c1CwW+UmLzIE5iZK/hMBlDWaJzWNrIeXHbJSGUwepe5ix3nkEeqkPjWpJoCOTcnDZEbbGzKCqmRK4x6hgen1f86duD4uI5Xrx9iZKX3d9kAsbLyKWfO9TrlPqpvgQZtYMcEZuRGou4aiE3QcSKWoQtL49OKRi4HqtRNyO9MZJ2jIG03dkjwz37upTqeZkgyYGTi4LHsuyBmFLLdgD179wyb3mIgDZ2p7KWKbrEe/c1EZ2hOhKPMXjnccrrtQohadLu2ItcRKKvAorUGPQbiSefeEM22sGSqOKdgz9dUJO5bPDwgpeK9kaPxf78Jk5sw662EmLMw+3WKNhrKKhQ8ZSpOEmax0/gy9cu23WvKkDjelEijxlYSNL0l2EB1Ek9E6CGkS8EbqnAV0D57abEaUcnyvlUApcy3mZTi2navBKwFJSOoafZ5LE818t2Qk8MVJq1idy/sBOsxQsuP6V6RSj1AYfO0ujzaoKKPWwNG/7RY/Oym4AHaYWF+beRP2uvvGlA29pFNUYI2xg3NylWBoJcgNRrLrc99qIO3AFKkHTWo/m2z5YP7ffyws/mNP2hFomT313NbY8qSAl/Eucpur1x1vmPf21vShUGFP8iln7mIKHdn5csutLkoA84spilFYA9IGmDoggl9Cx7RZiL26kAQzef/yIUHrIVQ3wR9hKD/02PfaIOviL9hx7eA8LBYIFrNHmY90QXDV2wQR0IxIiBzt1/M22is7yFoKPA09s4dwtuzaCgWiTamS58O8neQtNb+mBnwo6oY+hVE97V11UAeB0IGO6l+PogwoH7wcy+SWegLmTR9SvrBtS45GURav6jLajH9nY4ZpJSa0i5vrdO13wVVSp67BR+GI0boAtpSADvRDsjSAwZ3rO1ZxfipTEzVO/NwsHb9FSF2HNWWtIFhk6vDClugTI/0SFxqGGXVC/G5ChhqLQzW59VdpJ+hWSvc0awqNSJwjdP46clqY28gNlm5qXqxHE2UwLslqgPAP2kpKrBixcvHI4W2yrkx+kOkXUb6HpAdQsmi2paoBehRKeSRekOT4TgLHPRaQLMw2Pjo10lmNbpfckl2nlpdRvyGny98YJDPSKEBxq9xjVxQ/d0VXPwd9eqUYVnVC/7CM7KYyYkRbjqYsY2FirMKRDyIlHG6+QPzVrTBeLBgeZcR01tEeXyPegSxWJ0pPG57rn7W8vKjcVyeCYodD2Umj38Wx367Js+h6m0kWonBix6LywKNmBMia7Apuy5W9aPWYN+ngMSta4BPMXiK4uQR4JFJu+1vA7lemvYuLDlvhfg61U32ZYkA2yFEh9QUEmK2hRbTSjqOeg5HdqgalGAau9ZrCS2n+WiiIUFfer0Q/Xj1bkG/AasWg9xmnTeU8x/VIP/TK62e+xoo+waiI2E3IydAsKDpJ9g8Lf0zh8HKptxhLFvr+mzUhAtvmCXVLymhcNiV67YiyY42S3uXGvoxeHe3M7fXF4wH+MnP5FP8lLXWRueMJDeA8A5Ux7PXL78Esff8MvJHbvwx+ROOYgvYsv7m5DulpTAKiOMp5fwPL++hzETjoaPZdh/ikjK1dNZO1nwAE+AKpeSDRKHlJWw2DRBHbYsu956iWuS06cqDpf4XurLdOMHP9XA4sMVAkjTw7PyTfXRyJ8HudgiqxnEQPiF3wSpBcrL+AtP62EXg8ajJ7CzrK/ANtxtW6dH0C1SZwN8RRbcb/fTW7SLGF5tFZGJfEY0lEM83zRyV+hDEW869A6tQ8BxahJXoXfNkpX05ZF0x9PtUolOtJYVwu8VClCtMTrPR8vDBkBNJHAH49iTc6DrRewKBeYz1Tt8Bjl7xJAZILAC1u5SsfKAtRNX1c7S6pJ8aL2ZTwwQTyq3PcQABavseLMyy5o87eOx4s3ujRPUigNpTqBqwmoPemDhfRYTkDetnZWbZdjj0ZP50li+MOQBthQL7/Otrou+yZBTQK9meL96Au+PTnhjMqlonr/C/RaCre/sOh7PHASDZy/zCbZyHzMM2BColZqhTFTr8GAktoMHSe5a0Ht4S/eOifnXcFk5prYEmISievGtWFZXKniZ6DDjnTwe1AnWNBMnE7gxAm40s2ERh8PyuKT1Cl+SYkJcX558OdyC0oln1DUp+GwMEn5CIU2UUWXz6osH75Lm4OV202SPNKleQMcr8pvC0O2ex68vxXrJnpzFFTJGMQkLDa02giA5hknOo2IjfCceygmUhx
x2usVIuVt071iNjxuf+kXHUcz3linAzAjsA+0Lv/ZJ1tV488mELl1CY/UkSXx2GGheH6UZhFpGZpxfIBHs3i74SYe1q0K4IMXjyYetN3J8D8WSxrgU7chJcDoBs3S5IqiIaNYvP1EOYwOLSWGBrgo8+DfBqKwKzHQKQ+SCa8VEf7YKmcHwAX/AOAQ6W2FwNdVXcyvQw3StTx/vabTC6i6CKqSkK9sPd3jK1kvO4B3HdfKIgLFgrZ0jA2lkycT7jgwdl9thIepzez7Adng2TIRCgcMZ8bQIJlW1jFcN0OrbP1Hto3G7sBO6H54DXyAN1CZ5ggzj1qCrTVOkaGOtnWnN5ktXDmvalEi3W/phkhWl78MV2Nh2KndShAhJjKg2hNXsTMR2kT3iBmIQ/ngKE8Dyr49MZu8JvHPmqe5BXsvR9vxYJ0llGNU6ga/DhokhUAyV+SxAm7e+cZ2S9+31WK6Ia9GE4TnqpvL4LSaq4hnulGenYhM5WydwWvQJdU+sosmdboO584O5yLmZmNvbVmvxWv25dsWYmVc5nXMMwXHrtGAzzhOxkKvlR+058ECyxIM9hUeFmNGbPHp2SwmVQMUdf9b5Ej1Z6Gkj3DgooczU53kAjKNU8SVgCVZW1QvX9cfTUISwuqkpDog7gxwP7wDUR3LCSGD497pDLLq9iIDEGe3SX7InJfP1FW7SE6ymjavGTsgt98dSf76h5wt/kYOHrpfJjS3hLT30XI4kgzDPoYnhnbxWjWincsLjTDuUbIvY9QGoMoVIYdK+SwitEFzdizGE9q5EIvctiagFkv+2DGvw1tZKxqjPmDwpN3gTXa6hOUeLtC9bXLIhM1tVU8acD/Gyv/izsC6oNj8RO9KpN2Hb+wUDyIt4XX5j7zdvgyI5EcU0kqPt5lqVqoaC9qAowNiSkj6NQVTNz9fu0cWAgvBT2ee02LGxYqahIUryfI1cCvPVWpmfFEFof1mFEzbkF0diT3Rf4TC4FsSJtkOfdaIM2w1aiHGWbl3od6Psu56OInpKfKYaUFx/KL6+GxAZfLjIDDfMjzpCrUVo9JUzlcxAd1stfB1WI/ZVq21hU+1xa4fWkBYlHFjqsZHg3W82rgHdIWlD+0zgY5PbMAIF9TmKU2mpyaEiv2rftHyALQ4ShBpWtW4FH7vZVX5cKYLPi7zRLAf8Or4FqXWucCzYUu8DMDiibrcCuEQUuEm0oAIp4dYcaJvUUiG3IK0/TSD90cVwXdL21YI+YPttvWk7iPgF9Dq9iRpRQPQ9VKIKgDL4qhNJIgNJ6CU8Fsi/6NJ42366HLQP0PNq1/VxxKuPq9DXzsazTV5LuipgM/tjMh58vmIs6RyF5TpMMHX80d8JBySsLLPiFI034qmdcvzIKBKNGwjLqEF8TK8Bnib6hlCQB+l2hPcwBeleKSe4LpnuxVtkJ6zQddG8ebxYxNKdnY1D8Y6uiUJUW4iS8zjtrtLgVywEEXlCl8ov/PkIQx0GyWN8RAVW+bIuQu/5gxSF46o2bZyW0YHv3git5x45Itf2GST1yuNMUKJy+0j9SZWOZeWG7WYQ/gRcdkNDscuFN+oaStPkm0Y8sXbYqMl0wg/QoKf9HJoN5gBulBSRZI+5Mryk8YQtUQJSOIR4K4Xxxi0SI8ZjSXRCtZ5bsEg5WJrJjNSR3FQm9wPq8kbzAdN8i4lUtQuglhhRpCJ50J13T76sgOUIvmp3chyimscLMdEu34fWNJf698G8qTWBdo4YuK6vr36/tr4EhagJ7ejUaRIjUFY9jnjJpdonkE5smSKuIhhqd09gt6fkl1hTTo918NEES4WuK9cwbKwo4Z41XCeDlQcmP1lXjN+cBHClKdRASTNDfd2YoyPnwdFL2vfDcfUu5lvTQx5Z6j4tZGPqxSuanIgvzL2h4CDMaKN8GiyyY0yjQbS4LmvPv44YE4imsp/0BL40AemR1cP4GXPAd9lpI3aYacFdLllEIfmKliR4Se4tAj8a7R1TAfJtohFgMyEbOOj7i+lcKgUr1NcuMJBpLf2cxDzYbT1C1gq//Yj0PUK1qdEzXvkrkaJbpoR95ichzR+vtqs7w6FksJ1OGDjPuK5zxQOwDfQAezGM8EAytiGDW4ZFGMlqPkrSERpr3haarS3snqmY/5u899iVHAmixX6J3izpWfTe7eiLtujd14t5e94DniBIAgQtBC0Gt6dnLouVGRlxzgmTVftPbmhEFZGKWId4GsQB2JaiLXQLct0PbiqSY7gwiqS9AUlXwkI34bSelSdDx+SLiaZrtIhdZaX0zzDKHXUkIb9wgZNSq621+esGh687H+EUsiK7YdeBxQNrA2+Fu/v91BRaxUKqfuaZ/FIqivhuUqC4fcmcPvc48UK6vNrzdgKdsKJnxfoJcNdHK7kDFE6rLwnq5hpr4vg4GItVB5MZzeK3oZlX+Kk/nMlLra0lh85kerKZTNMKojLvi0Yj5YUdRP9Iswk3v2bEN6RndNabVSBOLvBiD/zQagGnx/t0DySBxw2El5gKfB2ZpjCcUC+8yuvIC+SXrfKL5CU9aV8HR+QgVvdXet1+sdPdV7CQuP2a1/6gsv3CEKrV+mpZ/6rfCkVF+1JaS8+ctDc81wvUer8tT2FFlXjYWXAkN/jy16R3tmRqLiUixmB356WZtwDPFeBbMD7kpdwm0YImGfCWKTC36kXRqNu5YL0vjDnHsh1IejwLomzFacWdl672a5IMuf7t46JSftvMXWQbMkcEf1K0pwY3gzSpKojORL+9DH9bYL5jMneRqhU/ueqkBLI5CqTS9BfMlMjavueJpAPHMH1S8YZ8mNFFhWR3673kq7g20Pt8o/6r89735LBUjKeLj1DBm5yrmYQvQYiyLkV/0uDpAMEhVmwa7cZeLBuA+9zbsOc3GCxwdD/ebP7Z8Q0khYjXvlNCGx/T6tmbRpgvSkR1fBEVgjCrAi+uat1ntumKRXQejyfjRfErEaGbDRya30IDcKWf3/ep8DM6o/aoMgv4FqrPLgq4ofIoMatq7IZi6oE5Vn/97AgZfeeRb34sYETkgXplc/z8o00hB2wE6QtFFB1tM0EfTA5sr1tyrQKJBPwf0ythqePlFtvGkH0xZUZpCfy10hipBORpkRIK0KnUJV9iwrOyqAAGsJXWwrDZLfSQQ/lbJvCfEAlyr+yQJPExblwPZRs2Rjlyu16ZQI2ZDlNlDWzISfqna/E9aW693/aa8ChUfYk9SiXjEBjGJ57EhBm1swRMKwfTH0RMaJGWAWsxD5bJ4chJWL31LVIvlTC4C7HfY9ZOiTvtwyqxKsCr20HZcSQ6l56GD0DYMkktgywpoY7To+bWMX84WZunl8HZ7TVZmbwxF26UUNxGFofdD2+nyIsit8KV6cgLafnx5PXlkjbx41SBiH0O+awOIcAgc/rpHnDuXvZumIOGrGMUtFqa4OGF2CN6bx+DBs7YxlezkAibXuOhNfJGpqvYgV7KTgfbDyRsq7F1Dvlc6bkEQGXX3XXz8KWcvM7Y1f78S6hgVael9+Iif82MwYzj7XBgV781PvJaNAvqZ
BMlh7VYnqOnk277xClTtWmtQakGbJWFKJs2EUTteBk3/bODjAeprInOoyOF4KMmAU/+3FoibyHp01bIkrkGM1qo7eicHJqWfDikmHZoXnZosljetgG0e6MnX1m0OftC9zDIqCJkhTLTf/oyeVHyltYonFzYtQs1h6QZ/9TGD8iclEmGu2ikxfe34BZko2g8M5UKGMC+eN4KOuFHGo2Fl7XaB+MFa1b576YUfF0K+4cijN8DclXwiw0eHNqPxxzFr9xPcz0dhm4a5g8cvW3GHIOAsPz5OQruUSbPdk0zVQTV11mZl3/dqrqqBf8c8VZBvim1/84fTEs/zYe2Oj1fSGjC0e4BOSbvBljQUY+ZzvWZdjhh1rJqf72P3lwFuMHopRf0hwkqi1NjIJNMxxfzi3MXSPxPxqArkNVZgKwLj0vIAoR56RwtXotlPcpNoeeXjzMglU+xiBxWBcF37BrLQJMJfm/RALeun9Hxh8QjkJsCL1ZpjEN3HYi3WFxmXQGBVlL5BQtgr6uKpM0F1CU4z9IUAVPZ26g1trOUAn2JrGDaWfFgnKxoA+MGdSSq3Xte1e6HdjwnI3tlKy5Pf5wWmw0dCkb70+LaVgHLUU/NfmPnaxsjsOTjymzgcByJ3oj+ZdyVMB67edQHT5woG/shPiEwAVbZLYD4B3sDDKhJvtJVOMgAjIfWbVKK54A4LMM03uHpzMIyVmfquF3UT3X3CvlhEo7hrXQWhSr3PAvWC8epQAryDAfrjNla4sknSn0LFyvATpwNQfdMjKrPXDuV/1ihXlELTaDoETumAPxdZOsI8pfNlg349y1Xsf8bLg7hGWJKwTQI6S0ubdDO4WpgYeDZ5XC+QE04kgkcfKCgQSHbq7xmwGqKY98Feb//iMKYZZHIIbi4CnpMBbowMqOYeSdTfup4sefPEeW5b9AK5kf/KC8s1dhDJLswqPRx0RQzBr93a0JiJIQnANL1+1jUsLmhVxVx1KavqQ48fmel8UWr0pQyLytIAXPOzqdIg9j+qzIuMGPbZXaO+dW+5aQo9fwFE7t0rhbhBQK6LC90kUK1bfi9D2rchJYd5+5Li85qAt1TLZF5YCAaPqVXUWSBjeVbx62iH10oSkglpi3uNx3KDQpMqDDbFCko0z+Kd2HKdbcMBpMy0Viofu9+F6Zj61k+9SLSFJBlgL2WStZO511783Q/E3KGee9jCbprxQ4KyKiUSI9Q00JYhcYkbwd1OoJ+2fnsiBoRyEpqPmQGCZXQABSaZ0Hd42XMNI0MNG+6rE3LXrOd8BbvhzG5MAlOEU0nACtbxzEaGUFpLm23+0YgZxaGC4n+Kmj3G7Yk9TKSr+6yPlMJHJbHXPIU2uZiqz/sS4xCHTCqhWCP6fHvK7AwQpursqyV/dnMKF0l5J5Bn8OvLaFPTFxKtmwW/mglzFb9gyX4C4t8nGzcVhLBhl1MPhJe23J+xabjBKHyYhq74qy/35pvarmhLIFm6vPD6oCGijAvnzOM8C0U4AoooYfroDZnQq1Y4BJWGVCRIpm2Qsxn5QctH9IIDocWLZCgzVWP2aXavX69rn4cAsuBWiXThE/w4xuZr+BOPFeMBFX68Di72ceAdtytGzPR8t/VuoRxglFTaM/jWUvx9f+g+K43EsYMS8ylLf+LEKEyXL0/TEMIcm08JIGXJEQBwT4Li4YdwKvsjce2NSoHmieajEiiY499loLAsosm7IcZxGzXVtOpGwxJprAnPhpktYjKz1gL48WH8Pjzhh6hdGN3T/cl4GHPpW5+lYFdoimDrmuHTgoBShU/n3sdd7FCyBgGQaWxgW1HwF/oQdRJom/jh14cPD780ICUtQGakxAkvfkBolZgsyhVHT4ai0rs4JWoPPz2+i9eR26LBgSuwdS9TvXKVmk2Ryd/zsXRR+iypF0i+TK/C1lQMR7wT/nMoJ8pNZcC0bIqfi7fGEbsBf6X8ZLNnQySnA5k5ziRhVq+hr8ePumRc6+a8pJSk43sKNhybQ8y6FpN0v3VjW3E1wK6G9gb4AWdkAB8nzaP+HzddIhJ4o8+Ah07nzJW+NIzVIi8ktDBA9HMrM3N5QSxH3V6u8/E/Hs55UIbY5SdNIIcqs+5PW6lY1btPOHgaRvL5Eqq+9HeWWBm9r8+SaIu2pjiwHLc+xOnXySim0qTcBtAbNF6I+imlCnO7XzMD/5STCMRn5pERT94ujiE95IsXxZka2rYdbY5LUSYzdV2XEd4/IDyhS0O5rKkthF3fgIFLTaxzn3BkeaCxwbvXPByutZMHb+RJb8VpFbAuPVajGAQF88PulHcgaY87P42gPQA5Aq7nb0W0DLKTm2sd5hSvT41J9CQQpjP/ZXYPPWAtN1SolzIkuGG8YazvysVw5KHf6RivwEVxBUiJK4Fx/OfTABQIPba9AnO31GmEKI+mQVxZjVha+pzy7SUch/483+RvUNpTTIB4n6c3NkKhk8fdIknkzpUbvhYQobBrGwXNPmJJCPqYxfHA3zCGGSAmwE2mMiFuVaCZ1jnSEWrJhz9RwhEx+HVWCCU7UetpvGjiLsDegjRdUykmH0BTgMH/mbFx5zoA5R6/FPzmzpqhERcsZ9lzo+ahc3ry5odhSSK3/hv/xNI4Y5E67oPznyMKu4ez07Rk32d4IxBPUFqGBB58YXYOIa1ueYReC5NaSFxiG0oA33kIwfHk151LwWboYVhygbxVvx3owG7k97p9428vHEEyF92kacGtmyxJAywyrBof88u9DeTqoX6H2eI7DrmWsfNvCxhf1NWUNw0IPFb8IacLZJiswH4lqiIdh/qVI9vAVk/QBCKgpPU0fybHeoH2in3304/XpTBQJlnH7t/aC+xBgTbiuPx44Lg+L7N64vMquZhQ+/Nf1NHtZ+KCXltzdIWQh/TqzVdyTg5BtP+KGpcY9IwUMRQDyX6Sgy1huDIAKz+TJQxPjgJK7CL5ebf0XUOe4Y6e3Zhhek6rsFLxqOPgh1/fQqmv7FwoDJwuOtQv9v2Y5UNOHXjG1LUreAx+4UQV3ZP2Fec+tVcOYGj4yB8UeyPlJaV9kjVAzNK4QkTWP7xrA0OeUTboO00b6e5x5VgmxK1W93KyXKv19jN36Mw5FGbRHnY5tMEmFy+mr4MzwdCjc4A1+5E/U3YUhqPEUB7JZsPHFIzBSjjT3OgKfOlrSk3Jn9sMTLQ16FWFB7QAQmjZGEcnik8BfqPwVS0xUAuKUVyyvfXahe7ssgctWyuhf2td58YDA9YRODXjAsmwrG1+sJ225wYx9G38wIUaIlv2P/ky8p55a/s3thkee1jaUvJbHjD0HJ8VSbTkcAb4H8jKncZr9rRKc2/sbWmj6nlSTemZWmKxszL41N5xs3r8vwejpZGeTiI79ku329wfMUuzbVaqcgW8D9jJDk6PGploEoHqNwiDcpnxXLmkz3ZZU6Sqfz0oBt5egqjSkFK9nN3zeglKmb2NQr2T18/7NENOdgVeLSMBkmyDsNzZkOKzQDpkJiheT7T5tKmTcgz/asNjInDN6x1umD6K1fE4elH1ilnMsic92shbcWG
KSUKhXcQO5s56RImW4HF4nJ46LYiCWE+XSP0XAuvFX8VFqP5MqplSez7Fk/L/nXnDJqWjS5hdP88IpLM6cVXWFXCfuOLlVnrDTIlvvtfePKCLPeehPz87OVLRddDRN8Slqi1mGrdknIBvXWVsJpF1VKH6Y/qlLxHdYMXvP5W22s4gd6m0jCfQpRq+lNMGhMkCqM4NzZmANXZ5hs2+RStfdVus45Ct1vt6coNIDJqihxTOVaNf5CzKJidOCf/WsiwmPmpGQ++a/qwTnIXS+76R8iBR7Zp/UUEW1wD7KAxQ41HyYZqzfVyLCA/+FOMK8jSqnr+/UiGIAutyuZzXJ35YOJoSI+r64KA66EtcPFNn359j+mVfNYT11Lm5be16AwzccMn+I4jwlWzyOxu6tQ7Zv3CPpXiSk+RmU3J+Q3kTPyHHxzq0YOdow1s9WS55pjKorTP5yn5xq355F1uoSPADIiJyZ/X8/jBMFeSqIthtT8mcY9zMb6BfI3npRiwGDO570gpugqtBoG1RUMttmlyM33mWtGccWywXwd5iLKf9rveea3EP+8agHMNecAwqXzqlqjrDxBoSmkq+JOIinXNTIyjBk1Tlb6QTMyEIE0Byn8lxvESnZMnkdzXgEK0Sf60hLyxlUhaFeIFFfXH4hs5wUcTzw7dxxr6Yyc7r5/nzwjMzpj0aXlpYwErX3tdBbtw/QnulHZD9q4VtuODYGnjA6uojtFxP9rGKRb3iCjMQuGhjucVu6sEWc8aFM2TALHOhgiTV2Q7JnTfcL4tvnfpt3I06fKVk+9VUcw5HQ2uF0fjU4kso6D9GIaNTdd32IB1pVPuoGFn4S5HOxHziYjR4VJUUUCpNy2ea3pQKTJBv3pF3wVSLs1nZl1fiURfyTUaDdQvKS+UiO8Wg7WBlOm3cxa67ZG1KwnedaMu478k3O+e5IspP77bb8RpIrYfMMz2YJ2NL6vZYK//tj/gB7uWfIaH732NNUMMr+bokph10/8bi3GcpO5ZTyUdXThEzj0jsQS8U91jpt8fMsiHsfyntSNJ0KAIPNFm2kSJoO6pn5dBYtx4rrBVok7xRlw1pVg/m7GIjEKmW7PX2ZELCGIypkcj8YgW8Tcsbvu6ydGtvOpCryf7Fv4hZnuOFg/I4hZ7fScXbvnQZVbtWf/l9VGEfj+xJzFljLAvUAwVS+z35C9JQ4Qp/vi497fV9Y4YfTJNVOovKBWGTfu1fpEsJxRzLpSbbB1jru+TDQzEFhokikJb7/lZPnNKws2rnNMEU/W7nj0B1yLMW+LmuyQjs6BtZGrTmr2Mz08OSV43X6/6S+cGxYBL9uP/gu8jg9M+rmXIflooN66XwmRmwuX/9hj0vL1B80NkXV070+8XacW6HZshWSIe8QJCPfdQZ1MWQ7L54nnECp5AlEYVaXJVDANs1w0/9kN6t8BwYRYDHfjfuTgMPZr6kRPmDCTh58fcf7ryfjr2+mLZrv84gKLIEgwJTjB2hfh6+nInUwURbq+LPeWjBKa2EYbP5Ai52NPPf9GepIAYzK9GHTQ9k2oT0eRFZgWOoRN4niMdq0iARvvo8NQJe2/OIOM38S0j3AfYqzud0KmVogbdI+haFB2/48AB5Peml8o3abaNAbJS436K72E6kxQXDNx6H0s5zy+SJdtom25WBR+mGH2H7tP0yfgFfU6rZ0aPF0xHWQbMwVn0SG5UEprsE82Y7Nofi4t/1O16XDhSuNroPiSv8gnS40r9gfjESLlp+4SIG9TJ+HMuOfRJy0AGi9nR+Q84MnxdjEutxeZvqNPn833il/oH0k0buMyngzBx3ZNqHnq7bHqvLIs4onrdvsNIBz+1w5/AtEhQA552CnUZ3bnZNnI9tzdCOgAVO0j8elYZL62C2FhMG5vaUGMdz8uIVzkhLljGL3iBqyxAQecCp1qKNyZWF0eZsEtSZEbH3p24yu53DWZOHiiM89rR4KHaEWUZZMO1vx44mX2w0pMwjmaI7wuFf1wIE2KCSE4oN3/dRi9ll9tO0xzH6H9QxxBjkyrFtBGLdwpj9m6vmq3FFcYvG8V+CzQ6ggI0VffkQ5sRJmUPFsm/nCGrYyWOb0MVqEv8Rv2qudfsuWBS60W9Rg9EH7Xtgzm/GipNatEhjvjyDsQB1vZrWi1jOeTUyUODrCL5VBHVuyRCVXge/CyBSOWuvZPNACct9Dn2Gs8vbGAnhHBMDVsFjaVdd6YKCNBRVGy9doqOBhJw91oe7mnCKs5WojhQQ+ikE33d4sELXoUKQx5PGyV4JjBq0B/JVuVne2ba51AK1XKwpWdXh+vOlCSaNzpU7Op5TsjrYUVOrM2JHMZGjhgzoNKOud7AtriTtbLQ6+x2WU1Fdp4g2eGMZOpDZddLZ6h5qcb8Gz3+5lEWMi8c3D2iwoF58EMsTbOwPZs2kTnURS/bH4dRZ/QDy7aF8yWrqrbjZgqeNSsD5zvXWeGVCzvNUFRkOWIYR9K1H19qRFTvjJCam3T9t7xulIzyDRqU9BvGnid2l5JS0L9Z1uHa1Yk6hJnL6YGI4Qz7D4e/wSTUvk+bMZHDdqS1xmqs/vIv0uVwHTig4A6MxmOzGvESG7kqRmh+Hq7hwLrMh/+63kVPNNxjtIECPEWtf3VCknVejBB+og/rhF+ryFgCSa89N7Qoaz83AgmK8rLyy+byDrKR53YnPpSHRywgybNUD8wSPnGNqPnaGPHF1ly+Q4AaJmovCngZxfR6DkDC8Tm1ubKjVv1XgvHAbKBgfs+5t8yyd77X+fLX341ZinrFPil5EyG3Kg4LQ9W6XqKlbVL+TZ0dwtnKdY8xfivn0Py/okm4Ju3Rq7Qn28W8nAkrwifZLXZgSj9cBosw/M1pio5Jrr4zE4a5T5rL4zHMkmO4IzVH6l0JRsMYV9SQX/TFAA3ET45fkpy7iCI8paywgniXC8nhxTIyawhvQAoXXRGUnVqcgkl+G4lYPohs/PLpbvIiZ4UNlDoTtwE79CLBZaHpl0UFZg0GLV8mU/t6Xn/Dvv4wwFXU8fQ7yavGEgryhnYWFdPUJ3ffTkVCZaRJWgMIrUvc7iN59Jg27JVW0Pj0d3vDVfzGLDRWgA6rdCLH60LSFzgC6LqWJ0NHkU21bqnR3hNMnwz7VGkb0DvcqfxB/3D9sTxoMnkDrzFe2pLCFM5p+y0y2uVT3VwMVypFLBGYPho/+cotkFzgKNhb/6Yp8KNL5DZFr6hn7ST2+eEBQDr6y0dTiOFMp3z6VkUS/eals7LPIVtxWcbAKp2sOhYsZMNRAwHW9p4KB+K7lnfvx6b5lp5WGZ297/5JqFtlkIxn4zCxV3cVvVzH0uJ+ci+dsOK7UB40RmrEpxw/HfJcMUVOyDc1+E7LMeawc4EO3P+CBQoB+0A4nblp/n3F5DeurF/aSB6JvNqEaB4xdbJMN174us5RlJkrqkXv/lCMe1xn6KkIpXtys8tYIFUZRGbeyUM+xT1LHONdKL6dFNeUKVpUm8X+DQoOVqVpHSBipCzvJ+5w7nt4zshZRXrRBs7NjVWdM+00vkEl+q39nz0PnAP
diff --git a/docs/user/data_replication_architecture.png b/docs/user/data_replication_architecture.png
new file mode 100644
index 0000000..81a3aa5
Binary files /dev/null and b/docs/user/data_replication_architecture.png differ
diff --git a/docs/user/hac_migrate_data.png b/docs/user/hac_migrate_data.png
new file mode 100644
index 0000000..2c310a2
Binary files /dev/null and b/docs/user/hac_migrate_data.png differ
diff --git a/docs/user/hac_report.png b/docs/user/hac_report.png
new file mode 100644
index 0000000..6833931
Binary files /dev/null and b/docs/user/hac_report.png differ
diff --git a/docs/user/hac_schema_diff_exec.png b/docs/user/hac_schema_diff_exec.png
new file mode 100644
index 0000000..d2b6195
Binary files /dev/null and b/docs/user/hac_schema_diff_exec.png differ
diff --git a/docs/user/hac_schema_diff_prev.png b/docs/user/hac_schema_diff_prev.png
new file mode 100644
index 0000000..2491267
Binary files /dev/null and b/docs/user/hac_schema_diff_prev.png differ
diff --git a/docs/user/hac_validate_ds.png b/docs/user/hac_validate_ds.png
new file mode 100644
index 0000000..3cd6b5f
Binary files /dev/null and b/docs/user/hac_validate_ds.png differ
diff --git a/docs/user/proxy_timeout.png b/docs/user/proxy_timeout.png
new file mode 100644
index 0000000..fb15190
Binary files /dev/null and b/docs/user/proxy_timeout.png differ