
Possible support for using "GPU" instead of "CPU" on ARM Macs (M1/M2/etc.) now? #17

Open
eawlot3000 opened this issue Feb 26, 2024 · 4 comments
Labels
question Further information is requested

Comments

@eawlot3000

#!/usr/bin/env python3
import onnxruntime as rt

import numpy
from onnxruntime.datasets import get_example

print(rt.get_device())
print(rt.__version__)
print('========')


def test():
    print("running simple inference test...")
    example1 = get_example("sigmoid.onnx")
    sess = rt.InferenceSession(example1, providers=rt.get_available_providers())

    input_name = sess.get_inputs()[0].name
    print("input name", input_name)
    input_shape = sess.get_inputs()[0].shape
    print("input shape", input_shape)
    input_type = sess.get_inputs()[0].type
    print("input type", input_type)

    output_name = sess.get_outputs()[0].name
    print("output name", output_name)
    output_shape = sess.get_outputs()[0].shape
    print("output shape", output_shape)
    output_type = sess.get_outputs()[0].type
    print("output type", output_type)

    x = numpy.random.random((3, 4, 5))
    x = x.astype(numpy.float32)
    res = sess.run([output_name], {input_name: x})
    print(res)

def main():
    runtimes = ", ".join(rt.get_available_providers())
    print()
    print(f"Available Providers: {runtimes}")
    print()

    test()

if __name__ == "__main__":
    main()

output

CPU
1.16.3
========

Available Providers: CoreMLExecutionProvider, CPUExecutionProvider

running simple inference test...
input name x
input shape [3, 4, 5]
input type tensor(float)
output name y
output shape [3, 4, 5]
output type tensor(float)
[array([[[0.57910156, 0.61865234, 0.5834961 , 0.7050781 , 0.6503906 ],
        [0.64160156, 0.63183594, 0.6098633 , 0.73046875, 0.7211914 ],
        [0.71875   , 0.63964844, 0.5595703 , 0.6591797 , 0.5629883 ],
        [0.5786133 , 0.71435547, 0.56591797, 0.51904297, 0.62353516]],

       [[0.7265625 , 0.5600586 , 0.7290039 , 0.68115234, 0.7109375 ],
        [0.6035156 , 0.61376953, 0.69091797, 0.61279297, 0.55810547],
        [0.52685547, 0.56103516, 0.69921875, 0.5004883 , 0.6533203 ],
        [0.7182617 , 0.66308594, 0.7163086 , 0.58984375, 0.71728516]],

       [[0.546875  , 0.6982422 , 0.58935547, 0.73095703, 0.55371094],
        [0.609375  , 0.6928711 , 0.5371094 , 0.68847656, 0.6147461 ],
        [0.5859375 , 0.72216797, 0.625     , 0.52246094, 0.59716797],
        [0.6777344 , 0.59033203, 0.64941406, 0.6425781 , 0.71191406]]],
      dtype=float32)]

[Process exited 0]

Currently only CPU appears to be supported.
Thanks

@cansik
Owner

cansik commented Feb 26, 2024

What exactly is the issue you are facing? There is no GPU provider for macOS, only the CoreMLExecutionProvider, which is already listed in your example output.
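For anyone who wants to make sure the CoreML EP is actually tried before the CPU fallback, here is a minimal sketch (the helper name `preferred_providers` is my own, assuming onnxruntime >= 1.16 built with CoreML support) that orders the providers explicitly and then checks which ones the session registered:

```python
def preferred_providers(available):
    """Order providers so CoreML is tried before the CPU fallback."""
    order = ["CoreMLExecutionProvider", "CPUExecutionProvider"]
    return [p for p in order if p in available]

if __name__ == "__main__":
    import onnxruntime as rt
    from onnxruntime.datasets import get_example

    sess = rt.InferenceSession(
        get_example("sigmoid.onnx"),
        providers=preferred_providers(rt.get_available_providers()),
    )
    # get_providers() reports the providers the session actually registered,
    # in the order they are tried for each node.
    print(sess.get_providers())
```

Note that `rt.get_device()` printing "CPU" only reflects the build, not which provider a session ends up using; `sess.get_providers()` is the reliable check.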

@cansik cansik added the question Further information is requested label Feb 26, 2024
@angewandte-codinglab

Hi! Since this supports CoreML, wouldn't that mean that Apple Silicon GPUs and the Neural Engine can also be used?

@cansik
Owner

cansik commented Apr 13, 2024

It converts the models into the CoreML format and executes them, i.e. the available accelerators are used.
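One rough way to see whether CoreML (and with it the GPU / Neural Engine) is doing work is to time the same model with and without the CoreML EP. A sketch along those lines, reusing the sigmoid.onnx example from above (the `time_inference` helper is mine, not part of onnxruntime):

```python
import time

def time_inference(run, feed, repeats=50):
    """Average wall-clock seconds per call of a session's run() over repeats."""
    start = time.perf_counter()
    for _ in range(repeats):
        run(None, feed)
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    import numpy
    import onnxruntime as rt
    from onnxruntime.datasets import get_example

    model = get_example("sigmoid.onnx")
    x = numpy.random.random((3, 4, 5)).astype(numpy.float32)

    coreml = rt.InferenceSession(
        model, providers=["CoreMLExecutionProvider", "CPUExecutionProvider"]
    )
    cpu = rt.InferenceSession(model, providers=["CPUExecutionProvider"])

    feed = {coreml.get_inputs()[0].name: x}
    print("CoreML:", time_inference(coreml.run, feed))
    print("CPU:   ", time_inference(cpu.run, feed))
```

For a tiny model like sigmoid.onnx the CPU path may well win; the conversion and dispatch overhead of CoreML only pays off on larger models.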

@Vitorinox

Hi! Since this supports CoreML, wouldn't that mean that Apple Silicon GPUs and the Neural Engine can also be used?

On my machine it is using the NPU to do some of the work.
