Does Vision produce slightly different results when called from a macOS app and an iOS app for the same image?

I'm using the Vision framework for text recognition and rectangle detection in images, specifically the VNRecognizeTextRequest and VNDetectRectanglesRequest requests. Comparing the macOS and iOS results for the same image, I found slight differences in the boundingBox coordinates of the detected text and rectangles. Is this expected? Is there anything I can do to make the results identical? Also, on macOS, when I call the same Vision requests from Python (using the pyobjc-framework-Vision package), I again get slightly different results.
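For reference, this is roughly how the requests are set up (a minimal sketch, not my exact code). One thing worth trying, I assume, is pinning the `revision` property on each request: if it is left at the default, each OS release may select a different model revision, which alone can shift the reported coordinates. The specific revision constants below are just examples of revisions that exist in the SDK, not a claim about which ones each OS picks by default.

```swift
import Vision
import CoreGraphics

// Sketch: run text recognition and rectangle detection on one image,
// pinning each request to an explicit revision so that macOS, iOS, and
// pyobjc callers all exercise the same underlying model version.
func detect(in cgImage: CGImage) throws -> [VNObservation] {
    let textRequest = VNRecognizeTextRequest()
    textRequest.recognitionLevel = .accurate
    // Pin to a revision available on all deployment targets (assumption:
    // revision 2 is supported everywhere this app runs).
    textRequest.revision = VNRecognizeTextRequestRevision2

    let rectRequest = VNDetectRectanglesRequest()
    rectRequest.revision = VNDetectRectanglesRequestRevision1
    rectRequest.maximumObservations = 0  // 0 = report all rectangles

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([textRequest, rectRequest])

    // Both observation types ultimately subclass VNObservation.
    return (textRequest.results ?? []) + (rectRequest.results ?? [])
}
```

Even with pinned revisions, I understand small floating-point differences can remain if the two platforms dispatch to different hardware (e.g. Neural Engine vs. GPU vs. CPU), so I'd also like to know whether bit-identical output across devices is achievable at all.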