
parfor_divide and parfor_exponent do not work on int32 and int64 if device does not support float64 #1147

Open
ZzEeKkAa opened this issue Sep 27, 2023 · 0 comments
Labels
bug Something isn't working


Running the parfor_divide or parfor_exponent function with i32 or i64 arguments on a device that does not support f64 produces a lowering error.
Observed on a Gen12 iGPU. A reproducer is in the test suite (test_builtin_ops.py).
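A minimal sketch (plain NumPy, outside of dpex) of where the float64 most likely comes from: NumPy's true division always promotes integer operands to float64, so a kernel lowered from an int32/int64 divide contains f64 instructions that a device without float64 support cannot compile. For parfor_exponent the promotion does not happen at the dtype level, so the f64 presumably enters during lowering (e.g. a double-precision pow) — that part is an assumption on my side:

```python
import numpy as np

a = np.array([4], dtype=np.int32)
b = np.array([2], dtype=np.int32)

# True division of integer arrays in NumPy always yields float64,
# regardless of the operand dtypes.
assert (a / b).dtype == np.float64

# Integer exponentiation stays integral at the NumPy level, so for
# parfor_exponent the float64 likely appears only during kernel
# lowering (assumption), not from dtype promotion.
assert (a ** b).dtype == np.int32
```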

================================================================================================================= FAILURES =================================================================================================================
________________________________________________________________________________________ test_built_in_operators1[parfor_exponent-device-int32-100] ________________________________________________________________________________________
self = <numba_dpex.core.parfors.parfor_lowerer.ParforLowerImpl object at 0x7ff400b90280>, lowerer = <numba.parfors.parfor_lowering.ParforLower object at 0x7ff3f00b2a30>
parfor = id=24[LoopNest(index_variable = parfor_index.1004, range = (0, a_size0.997, 1))]{25: <ir.Block at /home/yevhenii/Proje...umba-dpex/numba_dpex/tests/dpjit_tests/parfors/test_builtin_ops.py (35)>}Var(parfor_index.1004, test_builtin_ops.py:35)

    def _lower_parfor_as_kernel(self, lowerer, parfor):
        """Lowers a parfor node created by the dpjit compiler to a
        ``numba_dpex.kernel``.

        The general approach is as follows:

            - The code from the parfor's init block is lowered normally
              in the context of the current function.
            - The body of the parfor is transformed into a kernel function.
            - Dpctl runtime calls to submit the kernel are added.

        """
        # We copy the typemap here because for race condition variable we'll
        # update their type to array so they can be updated by the kernel.
        orig_typemap = lowerer.fndesc.typemap

        # replace original typemap with copy and restore the original at the
        # end.
        lowerer.fndesc.typemap = copy.copy(orig_typemap)

        if config.DEBUG_ARRAY_OPT:
            print("lowerer.fndesc", lowerer.fndesc, type(lowerer.fndesc))

        typemap = lowerer.fndesc.typemap
        varmap = lowerer.varmap

        loc = parfor.init_block.loc
        scope = parfor.init_block.scope

        # Lower the init block of the parfor.
        for instr in parfor.init_block.body:
            lowerer.lower_inst(instr)

        for racevar in parfor.races:
            if racevar not in varmap:
                rvtyp = typemap[racevar]
                rv = ir.Var(scope, racevar, loc)
                lowerer._alloca_var(rv.name, rvtyp)

        alias_map = {}
        arg_aliases = {}

        find_potential_aliases_parfor(
            parfor,
            parfor.params,
            typemap,
            lowerer.func_ir,
            alias_map,
            arg_aliases,
        )

        # run get_parfor_outputs() and get_parfor_reductions() before
        # kernel creation since Jumps are modified so CFG of loop_body
        # dict will become invalid
        if parfor.params is None:
            raise AssertionError

        parfor_output_arrays = get_parfor_outputs(parfor, parfor.params)

        # compile parfor body as a separate dpex kernel function
        flags = copy.copy(parfor.flags)
        flags.error_model = "numpy"

        # Can't get here unless
        # flags.set('auto_parallel', ParallelOptions(True))
        index_var_typ = typemap[parfor.loop_nests[0].index_variable.name]

        # index variables should have the same type, check rest of indices
        for loop_nest in parfor.loop_nests[1:]:
            if typemap[loop_nest.index_variable.name] != index_var_typ:
                raise AssertionError

        loop_ranges = [
            (loop_nest.start, loop_nest.stop, loop_nest.step)
            for loop_nest in parfor.loop_nests
        ]

        parfor_redvars, parfor_reddict = parfor.redvars, parfor.reddict

        nredvars = len(parfor_redvars)
        if nredvars > 0:
            self._reduction_codegen(
                parfor,
                typemap,
                nredvars,
                parfor_redvars,
                parfor_reddict,
                lowerer,
                parfor_output_arrays,
                loop_ranges,
                flags,
                alias_map,
            )
        else:
            try:
>               parfor_kernel = create_kernel_for_parfor(
                    lowerer,
                    parfor,
                    typemap,
                    flags,
                    loop_ranges,
                    bool(alias_map),
                    parfor.races,
                    parfor_output_arrays,
                )

numba_dpex/core/parfors/parfor_lowerer.py:480:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
numba_dpex/core/parfors/kernel_builder.py:440: in create_kernel_for_parfor
    sycl_kernel = _compile_kernel_parfor(
numba_dpex/core/parfors/kernel_builder.py:88: in _compile_kernel_parfor
    kernel_bundle = dpctl_prog.create_program_from_spirv(
dpctl/program/_program.pyx:274: in dpctl.program._program.create_program_from_spirv
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   ???
E   dpctl.program._program.SyclProgramCompilationError

dpctl/program/_program.pyx:311: SyclProgramCompilationError

During handling of the above exception, another exception occurred:

fmt_ = 'lowering "{inst}" at {loc}', args = ()
kwargs = {'inst': id=24[LoopNest(index_variable = parfor_index.1004, range = (0, a_size0.997, 1))]{25: <ir.Block at /home/yevhe...lename=/home/yevhenii/Projects/numba-dpex/numba_dpex/tests/dpjit_tests/parfors/test_builtin_ops.py, line=35, col=None)}
errcls = functools.partial(<class 'numba.core.errors.LoweringError'>, loc=Loc(filename=/home/yevhenii/Projects/numba-dpex/numba_dpex/tests/dpjit_tests/parfors/test_builtin_ops.py, line=35, col=None))
loc = Loc(filename=/home/yevhenii/Projects/numba-dpex/numba_dpex/tests/dpjit_tests/parfors/test_builtin_ops.py, line=35, col=None)
newerr = LoweringError('Failed in dpex_dpjit_nopython mode pipeline (step: lowerer with support for parfor nodes)\n\x1b[1m\x1b[...ps.py:35)" at /home/yevhenii/Projects/numba-dpex/numba_dpex/tests/dpjit_tests/parfors/test_builtin_ops.py (35)\x1b[0m')
tb = None

    @contextlib.contextmanager
    def new_error_context(fmt_, *args, **kwargs):
        """
        A contextmanager that prepend contextual information to any exception
        raised within.  If the exception type is not an instance of NumbaError,
        it will be wrapped into a InternalError.   The exception class can be
        changed by providing a "errcls_" keyword argument with the exception
        constructor.

        The first argument is a message that describes the context.  It can be a
        format string.  If there are additional arguments, it will be used as
        ``fmt_.format(*args, **kwargs)`` to produce the final message string.
        """
        errcls = kwargs.pop('errcls_', InternalError)

        loc = kwargs.get('loc', None)
        if loc is not None and not loc.filename.startswith(_numba_path):
            loc_info.update(kwargs)

        try:
>           yield

../../.miniconda3/envs/rc2025/lib/python3.9/site-packages/numba/core/errors.py:823:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.miniconda3/envs/rc2025/lib/python3.9/site-packages/numba/core/lowering.py:285: in lower_block
    self.lower_inst(inst)
../../.miniconda3/envs/rc2025/lib/python3.9/site-packages/numba/parfors/parfor_lowering.py:51: in lower_inst
    _lower_parfor_parallel(self, inst)
../../.miniconda3/envs/rc2025/lib/python3.9/site-packages/numba/parfors/parfor_lowering.py:60: in _lower_parfor_parallel
    return parfor.lowerer(lowerer, parfor)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Discovered while working on #1143

ZzEeKkAa added the bug (Something isn't working) label on Sep 27, 2023