Bump llvm to llvm/llvm-project@c06d0ff806b7 #19903

Merged · 11 commits · Feb 6, 2025
@@ -964,14 +964,6 @@ DiagnosedSilenceableFailure transform_dialect::IREEBufferizeOp::apply(
     return listener.checkAndResetError();
   }
 
-  // 3. Post-bufferization passes are fine.
-  PassManager pm(getContext());
-  addIREEPostBufferizationPasses(pm);
-  if (failed(pm.run(target))) {
-    return mlir::emitDefiniteFailure(target)
-           << "post-bufferization passes failed";
-  }
-
   results.set(getOperation()->getOpResult(0), {target});
   return listener.checkAndResetError();
 }
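(With this change, transform.iree.bufferize no longer runs the post-bufferization cleanup passes itself; transform scripts that relied on that implicit cleanup now apply canonicalization patterns explicitly, as the test updates below show.)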
@@ -35,7 +35,10 @@ module attributes {transform.with_named_sequence} {
     transform.print %8 : !transform.any_op
     transform.iree.eliminate_empty_tensors %8 : (!transform.any_op) -> ()
     %9 = transform.iree.bufferize %8 : (!transform.any_op) -> !transform.any_op
-    // %9 = transform.structured.match ops{["func.func"]} in %8 : (!transform.any_op) -> !transform.any_op
+    %10 = transform.structured.match ops{["func.func"]} in %9 : (!transform.any_op) -> !transform.op<"func.func">
+    transform.apply_patterns to %10 {
+      transform.apply_patterns.canonicalization
+    } : !transform.op<"func.func">
     transform.yield
   }
 }
@@ -4,10 +4,14 @@ module attributes { transform.with_named_sequence } {
     %tensor_func = transform.structured.match ops{["func.func"]} in %variant_op : (!transform.any_op) -> !transform.any_op
     transform.iree.eliminate_empty_tensors %tensor_func : (!transform.any_op) -> ()
     %memref_func = transform.iree.bufferize %tensor_func : (!transform.any_op) -> !transform.any_op
+    %func_op_bufferized = transform.structured.match ops{["func.func"]} in %memref_func : (!transform.any_op) -> !transform.op<"func.func">
+    transform.apply_patterns to %func_op_bufferized {
+      transform.apply_patterns.canonicalization
+    } : !transform.op<"func.func">
 
     // Annotate the exported function as already translated.
     %none = transform.param.constant #iree_codegen.translation_info<pipeline = None> -> !transform.any_param
-    transform.annotate %memref_func "translation_info" = %none : !transform.any_op, !transform.any_param
+    transform.annotate %func_op_bufferized "translation_info" = %none : !transform.op<"func.func">, !transform.any_param
     transform.yield
   }
 } // module
@@ -18,9 +18,11 @@
 #include "llvm/Support/Path.h"
 #include "llvm/Support/ToolOutputFile.h"
 #include "mlir/Dialect/SCF/IR/SCF.h"
+#include "mlir/IR/Dominance.h"
 #include "mlir/Pass/Pass.h"
 #include "mlir/Pass/PassManager.h"
 #include "mlir/Support/FileUtilities.h"
+#include "mlir/Transforms/CSE.h"
 #include "mlir/Transforms/Passes.h"
 
 // NOTE: redundant bindings will result in unique buffer locations during the
@@ -447,14 +449,9 @@ buildBenchmarkModule(IREE::HAL::ExecutableOp sourceExecutableOp,
   if (!hasAnyBenchmarks)
     return {};
 
-  // Run CSE and the canonicalizer to pretty up the output.
-  PassManager passManager(moduleOp->getContext());
-  passManager.addPass(mlir::createCanonicalizerPass());
-  passManager.addPass(mlir::createCSEPass());
-  if (failed(passManager.run(*moduleOp))) {
-    moduleOp->emitError("failed to run canonicalizer; malformed output");
-    return {};
-  }
+  IRRewriter rewriter(moduleOp->getContext());
+  DominanceInfo domInfo;
+  mlir::eliminateCommonSubExpressions(rewriter, domInfo, moduleOp.get());
 
   return moduleOp;
 }
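For context on the replacement: instead of spinning up a nested PassManager, the module is now cleaned up by calling MLIR's eliminateCommonSubExpressions helper directly (note the separate canonicalizer run is dropped). A minimal standalone sketch of that pattern, assuming current MLIR APIs; the runCSE wrapper name is illustrative, not from the PR:

#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/Dominance.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/CSE.h"

// Run CSE over a module in place, without constructing a PassManager.
static void runCSE(mlir::ModuleOp module) {
  mlir::IRRewriter rewriter(module.getContext());
  mlir::DominanceInfo domInfo;
  bool changed = false;
  mlir::eliminateCommonSubExpressions(rewriter, domInfo,
                                      module.getOperation(), &changed);
  (void)changed; // set to true if any ops were deduplicated
}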
2 changes: 1 addition & 1 deletion third_party/llvm-project
Submodule llvm-project updated 2870 files
2 changes: 1 addition & 1 deletion third_party/torch-mlir
Member
Something in here broke the MSVC/Windows build: https://github.com/iree-org/iree/actions/runs/13196900742/job/36840094404#step:9:7647

FAILED: compiler/plugins/input/Torch/torch-mlir/CMakeFiles/iree_compiler_plugins_input_Torch_torch-mlir_ConversionPasses.objects.dir/__/__/__/__/__/third_party/torch-mlir/lib/Conversion/TorchToTosa/TosaLegalizeCommon.cpp.obj 
C:\ProgramData\chocolatey\bin\ccache "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\bin\Hostx64\x64\cl.exe"  /nologo /TP  -IC:\home\runner\_work\iree\iree -IC:\mnt\azure\b\092805 -IC:\home\runner\_work\iree\iree\third_party\torch-mlir\include -IC:\home\runner\_work\iree\iree\compiler\plugins\input\Torch -IC:\mnt\azure\b\092805\compiler\plugins\input\Torch -IC:\home\runner\_work\iree\iree\third_party\llvm-project\llvm\include -IC:\mnt\azure\b\092805\llvm-project\include -IC:\home\runner\_work\iree\iree\third_party\llvm-project\mlir\include -IC:\mnt\azure\b\092805\llvm-project\tools\mlir\include -IC:\home\runner\_work\iree\iree\third_party\llvm-project\lld\include -IC:\mnt\azure\b\092805\llvm-project\tools\lld\include /DWIN32 /D_WINDOWS /EHsc /Z7 /O2 /Ob1  -std:c++17 -MD /wd4996 /Zc:preprocessor /DWIN32_LEAN_AND_MEAN /DNOMINMAX /D_USE_MATH_DEFINES /D_CRT_SECURE_NO_WARNINGS /D_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES /D_SILENCE_NONFLOATING_COMPLEX_DEPRECATION_WARNING /GR- /bigobj /W3 /wd4200 /wd4018 /wd4146 /wd4244 /wd4267 /wd4005 /wd4065 /wd4141 /wd4624 /wd4576 /wd5105 /showIncludes /Focompiler\plugins\input\Torch\torch-mlir\CMakeFiles\iree_compiler_plugins_input_Torch_torch-mlir_ConversionPasses.objects.dir\__\__\__\__\__\third_party\torch-mlir\lib\Conversion\TorchToTosa\TosaLegalizeCommon.cpp.obj /Fdcompiler\plugins\input\Torch\torch-mlir\CMakeFiles\iree_compiler_plugins_input_Torch_torch-mlir_ConversionPasses.objects.dir\ /FS -c C:\home\runner\_work\iree\iree\third_party\torch-mlir\lib\Conversion\TorchToTosa\TosaLegalizeCommon.cpp
C:\home\runner\_work\iree\iree\third_party\llvm-project\mlir\include\mlir/Dialect/Tosa/Utils/ConversionUtils.h(148): error C2872: 'OpTrait': ambiguous symbol
C:\home\runner\_work\iree\iree\third_party\llvm-project\mlir\include\mlir/Dialect/Tosa/IR/TosaOps.h(49): note: could be 'mlir::OpTrait'
C:\home\runner\_work\iree\iree\third_party\torch-mlir\include\torch-mlir/Dialect/Torch/IR/TorchTraits.h(23): note: or       'mlir::torch::Torch::OpTrait'
C:\home\runner\_work\iree\iree\third_party\llvm-project\mlir\include\mlir/Dialect/Tosa/Utils/ConversionUtils.h(148): note: the template instantiation context (the oldest one first) is
C:\home\runner\_work\iree\iree\third_party\torch-mlir\lib\Conversion\TorchToTosa\TosaLegalizeCommon.cpp(126): note: see reference to function template instantiation 'TosaOp mlir::tosa::CreateOpAndInfer<mlir::tosa::MulOp,mlir::Value&,mlir::Value&,mlir::Value&>(mlir::PatternRewriter &,mlir::Location,mlir::Type,mlir::Value &,mlir::Value &,mlir::Value &)' being compiled
        with
        [
            TosaOp=mlir::tosa::MulOp
        ]
C:\home\runner\_work\iree\iree\third_party\torch-mlir\include\torch-mlir/Conversion/TorchToTosa/TosaLegalizeUtils.h(83): note: see reference to function template instantiation 'TosaOp mlir::tosa::CreateOpAndInfer<TosaOp,mlir::Value&,mlir::Value&,mlir::Value&>(mlir::ImplicitLocOpBuilder &,mlir::Type,mlir::Value &,mlir::Value &,mlir::Value &)' being compiled
        with
        [
            TosaOp=mlir::tosa::MulOp
        ]
C:\home\runner\_work\iree\iree\third_party\torch-mlir\include\torch-mlir/Conversion/TorchToTosa/TosaLegalizeUtils.h(76): note: see reference to function template instantiation 'TosaOp mlir::tosa::CreateOpAndInferShape<TosaOp,mlir::Value&,mlir::Value&,mlir::Value&>(mlir::ImplicitLocOpBuilder &,mlir::Type,mlir::Value &,mlir::Value &,mlir::Value &)' being compiled
        with
        [
            TosaOp=mlir::tosa::MulOp
        ]

I saw some recent changes to https://github.com/llvm/torch-mlir/blame/main/lib/Conversion/TorchToTosa/TosaLegalizeCommon.cpp and https://github.com/llvm/llvm-project/tree/main/mlir/include/mlir/Dialect/Tosa/Utils, but I'm not sure which change specifically is responsible.

I'm also not sure why we build TorchToTosa at all in IREE, since we don't use it...?

Member
This patch fixes it for me locally... still not sure what triggered the failure though

D:\dev\projects\iree\third_party\llvm-project (HEAD detached at a1984ec)
λ git diff
diff --git a/mlir/include/mlir/Dialect/Tosa/Utils/ConversionUtils.h b/mlir/include/mlir/Dialect/Tosa/Utils/ConversionUtils.h
index 78a882885543..baee63308ee2 100644
--- a/mlir/include/mlir/Dialect/Tosa/Utils/ConversionUtils.h
+++ b/mlir/include/mlir/Dialect/Tosa/Utils/ConversionUtils.h
@@ -145,7 +145,7 @@ TosaOp createOpAndInferShape(ImplicitLocOpBuilder &builder, Type resultTy,
 template <typename TosaOp, typename... Args>
 TosaOp CreateOpAndInferShape(ImplicitLocOpBuilder &builder, Type resultTy,
                              Args &&...args) {
-  if (TosaOp::template hasTrait<OpTrait::SameOperandsAndResultRank>()) {
+  if (TosaOp::template hasTrait<::mlir::OpTrait::SameOperandsAndResultRank>()) {
     // op requires same ranks for tensor operands
     if constexpr (sizeof...(Args) == 2) {
       auto argX = std::get<0>(std::tie(args...));

The namespace code in torch-mlir is sketchy: https://github.com/llvm/torch-mlir/blob/52299e6a659e68879b2c656ffd82ae2a9d28afd8/include/torch-mlir/Dialect/Torch/IR/TorchTraits.h#L20-L23

namespace mlir {
namespace torch {
namespace Torch {
namespace OpTrait {
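
For illustration, a minimal standalone repro of that ambiguity (hypothetical type names, not from torch-mlir): once using-directives bring both mlir and mlir::torch::Torch into scope, an unqualified OpTrait can name either namespace. In the real code the lookup happens inside a template, and MSVC appears to resolve the name at the instantiation point in TosaLegalizeCommon.cpp, where both namespaces are visible.

// repro.cpp -- hypothetical sketch, not from the PR.
namespace mlir {
namespace OpTrait {
struct SameOperandsAndResultRank {};
} // namespace OpTrait
namespace torch {
namespace Torch {
namespace OpTrait {
struct HasValueSemantics {};
} // namespace OpTrait
} // namespace Torch
} // namespace torch
} // namespace mlir

using namespace mlir;
using namespace mlir::torch::Torch;

// Unqualified lookup now finds both mlir::OpTrait and
// mlir::torch::Torch::OpTrait:
// using Bad = OpTrait::SameOperandsAndResultRank;   // error C2872: ambiguous
using Good = ::mlir::OpTrait::SameOperandsAndResultRank; // fully qualified, OK

int main() { Good g; (void)g; return 0; }

Fully qualifying as ::mlir::OpTrait, as the patch above does, sidesteps the ambiguity regardless of what the including file pulls into scope.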

Member
> I'm also not sure why we build TorchToTosa at all in IREE, since we don't use it...?

Ah, TOSA is a non-optional dep of torch-mlir. We could carve it out with a TORCH_MLIR_ENABLE_TOSA like there already is for TORCH_MLIR_ENABLE_STABLEHLO 🤔

Member
Sent my fix/workaround upstream: llvm/llvm-project#126286
