
Port all plugins to AnalysisPluginV0 #1047

Closed · 31 tasks
maringuu opened this issue May 22, 2023 · 2 comments

Comments

maringuu (Collaborator) commented May 22, 2023


See #987 (review)

The minor part of the plugin version should be updated if the major version is 0.
If the major version is >= 1 and the result schema differs from the old one, semver requires us to bump the major version.
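
To illustrate the policy, here is a minimal sketch using the python-semver package (illustrative only, not FACT code; the behavior for an unchanged schema at >= 1.0 is an assumption):

import semver


def bump_plugin_version(old_version: str, schema_changed: bool) -> str:
    """Illustrative only: how a plugin's version changes when it is ported."""
    version = semver.Version.parse(old_version)
    if version.major == 0:
        # pre-1.0 plugins: porting only bumps the minor version
        return str(version.bump_minor())
    if schema_changed:
        # >= 1.0 and the result schema changed: semver demands a major bump
        return str(version.bump_major())
    # >= 1.0 with an unchanged result schema: a minor bump suffices (assumption)
    return str(version.bump_minor())


assert bump_plugin_version('0.4.0', schema_changed=True) == '0.5.0'
assert bump_plugin_version('1.0.0', schema_changed=True) == '2.0.0'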

If the result schema is different, we have to migrate the existing results in the database.

Here is my WIP script, which was intended for the device_tree plugin before #1027 existed.
The alembic script did exactly what #1027 does, but in the database.

diff --git a/src/storage/migration/__init__.py b/src/storage/migration/__init__.py
index d8d6b248..ceb39f04 100644
--- a/src/storage/migration/__init__.py
+++ b/src/storage/migration/__init__.py
@@ -9,6 +9,7 @@ from storage.db_connection import AdminConnection
 
 
 def db_needs_migration():
+    return False
     with OperateInDirectory(get_src_dir()):  # alembic must be executed from src for paths to line up
         with AdminConnection().engine.connect() as db:
             alembic_cfg_path = Path(__file__).parent.parent.parent / 'alembic.ini'
diff --git a/src/storage/migration/versions/a1c21b3422a7_plugin_device_tree_add_schema.py b/src/storage/migration/versions/a1c21b3422a7_plugin_device_tree_add_schema.py
new file mode 100644
index 00000000..c02f1348
--- /dev/null
+++ b/src/storage/migration/versions/a1c21b3422a7_plugin_device_tree_add_schema.py
@@ -0,0 +1,97 @@
+"""plugin_device_tree_add_schema
+
+Revision ID: a1c21b3422a7
+Revises: 221cfef47173
+Create Date: 2023-04-24 11:18:50.037043
+
+"""
+from alembic import op
+import sqlalchemy as sa
+from alembic import context
+
+from sqlalchemy.dialects.postgresql import JSONB, VARCHAR
+from sqlalchemy.ext.mutable import MutableDict
+
+
+# revision identifiers, used by Alembic.
+revision = 'a1c21b3422a7'
+down_revision = '221cfef47173'
+branch_labels = None
+depends_on = None
+
+# The version we downgrade to
+PLUGIN_OLD_VERSION = "1.0"
+# The version we upgrade to
+PLUGIN_NEW_VERSION = "2.0.0"
+# All versions (with possibly all different schemas) we can upgrade from
+PLUGIN_UPGRADABLE_VERSIONS = [PLUGIN_OLD_VERSION]
+PLUGIN = "device_tree"
+
+
+# References https://stackoverflow.com/questions/43153346/update-column-content-during-alembic-migration
+def upgrade() -> None:
+    connection = context.get_bind()
+
+    # Define the table
+    # XXX Should this maybe be dynamically generated by reflection?
+    analysis_table = sa.Table(
+        "analysis",
+        sa.MetaData(),
+        sa.Column("uid", VARCHAR(78)),
+        sa.Column("plugin", VARCHAR(64)),
+        sa.Column("plugin_version", VARCHAR(16)),
+        sa.Column("result", MutableDict.as_mutable(JSONB), default={}),
+    )
+
+    rows = connection.execute(
+        sa.select(analysis_table.c.uid, analysis_table.c.result).where(
+            analysis_table.c.plugin == PLUGIN,
+            analysis_table.c.plugin_version.in_(PLUGIN_UPGRADABLE_VERSIONS),
+        ),
+    ).fetchall()
+
+    for uid, old_result in rows:
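+        # wrap the old result under a "result" key to match the new plugin schema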
+        new_result = {"result": old_result}
+        connection.execute(
+            analysis_table.update().where(
+                analysis_table.c.uid == uid,
+                analysis_table.c.plugin == PLUGIN,
+            ).values(
+                result=new_result,
+                plugin_version=PLUGIN_NEW_VERSION,
+            )
+        )
+
+
+def downgrade() -> None:
+    connection = context.get_bind()
+
+    # Define the table
+    # XXX Should this maybe be dynamically generated by reflection?
+    analysis_table = sa.Table(
+        "analysis",
+        sa.MetaData(),
+        sa.Column("uid", VARCHAR(78)),
+        sa.Column("plugin", VARCHAR(64)),
+        sa.Column("plugin_version", VARCHAR(16)),
+        sa.Column("result", MutableDict.as_mutable(JSONB), default={}),
+    )
+
+    rows = connection.execute(
+        sa.select(analysis_table.c.uid, analysis_table.c.plugin, analysis_table.c.result).where(
+            analysis_table.c.plugin == PLUGIN,
+            analysis_table.c.plugin_version == PLUGIN_NEW_VERSION,
+        ),
+    ).fetchall()
+
+    for uid, plugin, new_result in rows:
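+        # unwrap the "result" key to restore the old flat result layout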
+        old_result = new_result.pop("result")
+        connection.execute(
+            analysis_table.update().where(
+                analysis_table.c.uid == uid,
+                analysis_table.c.plugin == plugin,
+            ).values(
+                result=old_result,
+                plugin_version=PLUGIN_OLD_VERSION,
+            )
+        )
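
For reference, a revision like this would presumably be applied with the usual alembic CLI, from the src directory so that the paths line up (see the comment in db_needs_migration):

cd src
alembic upgrade head              # apply the new revision
alembic downgrade 221cfef47173    # roll back to the previous revision
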
maringuu (Collaborator, Author) commented

To make our life easier, it is a good idea to start with the leaves of the dependency tree.
If we have to change the schema there, we don't have to adapt any other plugins.
Of course, when we later upgrade some inner nodes, we have to update the leaves as well, but this should be easier since the leaf code will already have been cleaned up during the upgrade.

Another benefit is that we can easily detect outdated dependencies in the new Runner class.
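
A hypothetical sketch of such a check (the function and the exact policy are illustrative, not the actual Runner/FACT API):

import semver


def dependency_result_is_outdated(stored_version: str, current_version: str) -> bool:
    """Illustrative only: treat a stored dependency result as outdated if it was
    produced under an older result schema, i.e. an older major version
    (or any older version while the dependency is still on 0.x)."""
    stored = semver.Version.parse(stored_version)
    current = semver.Version.parse(current_version)
    if current.major == 0:
        return stored < current
    return stored.major < current.major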

maringuu changed the title from "Port all plugins to analysis.PluginV0" to "Port all plugins to AnalysisPluginV0" on Jul 13, 2023
jstucke (Collaborator) commented Dec 4, 2024

I will close this issue, since it is also tracked internally and this one is outdated.

jstucke closed this as completed on Dec 4, 2024