Multi-client scenario on separate machines (grabbing the camera stream)

Hey guys,
I am trying to set up a scenario for an experiment.

Here is the experiment description:

  • Two cars drive through a junction on intersecting paths, so a collision may happen where they cross. The cars follow the arrows (A, B). The camera that should capture the raw video stream is located at the red dot. This camera should process the video stream and make some predictions.

So, the concept is: two cars crash, and an outdoor (IoT) camera captures the video stream and does some computations.

I need to set up this experiment for my academic research.
It should involve two machines:

  1. A Windows machine, where the first client runs BeamNG.tech and spawns the cars with predefined driving paths.
  2. A Unix machine, which connects to the Windows machine, adds a camera at the red dot, and reads and processes the video stream. Put simply: an IoT Jetson Orin Nano that should read the video stream directly from the simulator on the Windows machine (connection sketched below).
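For reference, this is roughly how the second machine connects to the already running simulator (the host IP and port here are examples, not my real values):

from beamngpy import BeamNGpy

# Connect to a simulator that is already running on the Windows machine;
# launch=False stops BeamNGpy from trying to start a local instance.
bng = BeamNGpy("192.168.1.101", 25252)
bng.open(launch=False)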

I have tried that setup and received the following error:

910.930|I|libbeamng.TechVE|Accepted new vehicle client: 192.168.1.102/51084
910.944|E|engine::SharedMemoryManager::openSharedMemoryInternal|Shared Memory Manager - Failed to create shared memory for wnsm_279deede
910.944|E|engine::SensorManager::createCameraSensorWithSharedMemory|Sensor Manager - Failed to open shared memory for camera sensor colour data.
910.944|I|GELua.tech_techCore.TechGE|Opened camera sensor (with shared memory)
917.203|E|Crashreport|*** A crash has happened in C++ code. To fix it, please provide a programmer with the crash report (ZIP file when available, or DMP file otherwise).
917.203|E|Crashreport|*** If the previous log lines are timestamped milliseconds away from this line, they might be related to the crash. Otherwise, the logs are likely unrelated.
917.203|E|Crashreport|*** Below is how the buggy C++ code got called from Lua side:
917.203|E|GELua.main_static_crashrpt|
=============== Stack Traceback >> START >>
(1) main chunk of line at line 1
--------------- << END <<

In general, I think I understand the problem: the camera tries to use shared memory, but shared memory only works on the same host.
I have tried to read the video stream from a separate Windows machine (Win → Win) and from Unix → Win; the issue stays the same. I also dug deeper into the beamngpy library and saw that its shared memory implementation is built on

from multiprocessing.shared_memory import SharedMemory

In conclusion, I suppose I am not able to get the video stream from the simulator when connecting from another host.

Do you guys have any suggestions on how this experiment could be set up?
I am also considering a few options:

  1. Run everything on the same Linux host (predictions, BeamNG.tech, car setup, etc.).
  2. Use Redis as a “shared memory” between the Linux host (because installing Redis on the Windows machine seems impractical) and the Jetson Orin Nano; a rough sketch of what I have in mind is at the end of this post.

I am concerned about the Redis approach because I have already tried a WebSocket-based approach; it gives only 5 FPS, which is too low for a real-time application.

And running everything on the same Linux machine may bring unpredictable issues too.
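For clarity, this is roughly what I have in mind for the Redis option (untested; the host and channel names are placeholders):

import redis

r = redis.Redis(host="192.168.1.101")  # example host running the Redis server

# Publisher side (Linux host): push each raw frame to a channel.
def publish_frame(frame_bytes: bytes) -> None:
    r.publish("camera1/frames", frame_bytes)

# Subscriber side (Jetson): consume frames as they arrive.
sub = r.pubsub()
sub.subscribe("camera1/frames")
for message in sub.listen():
    if message["type"] == "message":
        frame = message["data"]  # raw bytes, ready for decoding/inference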

Hello byzkrovnyi,

As you’ve rightly noted, shared memory can’t work across two separate clients unless there’s physical shared memory access between them. To resolve this, you’ll need to disable shared memory for the camera sensor.

In BeamNGpy, you can set the is_using_shared_memory parameter to False to avoid using shared memory for transferring camera sensor data. Here’s the relevant documentation that might help: BeamNGpy Camera Documentation.
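For example, a camera for a cross-host setup could be created along these lines (bng and vehicle stand for your own BeamNGpy instance and target vehicle):

from beamngpy.sensors import Camera

# Create the camera without shared memory so the sensor data is transferred
# over the network connection instead of a host-local shared memory block.
camera = Camera(
    "camera1",
    bng,                 # BeamNGpy instance connected to the remote simulator
    vehicle,
    resolution=(640, 640),
    is_render_colours=True,
    is_using_shared_memory=False,
)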

Let us know if you need further assistance with configuring this setup!

Hi @asaeed,
Thanks for the quick response.
Yes, you are right that I can disable is_using_shared_memory, BUT
I dug deeper into the code and found the following:

camera.py, stream_raw method

...
raw_readings = {}
if self.colour_shmem:
    raw_readings["colour"] = self.colour_shmem.read(self.shmem_size)
if self.annotation_shmem:
    raw_readings["annotation"] = self.annotation_shmem.read(self.shmem_size)
if self.depth_shmem:
    raw_readings["depth"] = self.depth_shmem.read(self.shmem_size)
...

which means the colour bytes/image are only returned if self.colour_shmem is set.

self.colour_shmem is assigned in the __init__ method of camera.py:

...
if is_using_shared_memory:
    self.logger.debug("Camera - Initializing shared memory.")
    self.shmem_size = resolution[0] * resolution[1] * 4
    if is_render_colours:
        self.colour_shmem = BNGSharedMemory(self.shmem_size)
        self.logger.debug(
            "Camera - Bound shared memory for colour: "
            f"{self.colour_shmem.name}"
        )
...

So, the conclusion from everything listed above:

If the is_using_shared_memory parameter is set to False, then self.colour_shmem is never initialized, so a stream_raw invocation returns no image or bytes; no ‘colour’ entry is added to the readings at all.

Maybe you know how to return an image or bytes without using shared memory?
I could also create a pull request for that, if you or the developer team could describe in a few words how it can be done without shared memory.

Hello byzkrovnyi,

To retrieve image data without using shared memory in BeamNGpy’s Camera, avoid the stream_raw() method if is_using_shared_memory is set to False, as stream_raw() requires shared memory (is_streaming=True). Without shared memory, colour_shmem, annotation_shmem, and depth_shmem won’t initialize, resulting in None for image data.

Instead, use the poll() method, which requests data directly from the simulator without needing shared memory. Here’s a simple example:

image_data = camera.poll()

This approach bypasses the shared memory dependency, making it suitable when is_using_shared_memory=False.
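Here’s a slightly fuller sketch (assuming the colour image is returned as a PIL image, which np.asarray can consume):

import numpy as np

# poll() requests the sensor readings over the network; no shared memory needed.
data = camera.poll()
colour = np.asarray(data["colour"])  # convert the colour image to a NumPy array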

For more information on stream_raw(), refer to the documentation.


Great, this works perfectly!
Unfortunately, the FPS is very low.
I changed poll to poll_raw, set update_priority=1, and additionally changed requested_update_time=0.00000001.

Do you have any suggestions on how to improve performance?

This is the setup I use:

import time

import numpy as np
from jetson_utils import cudaFromNumpy

camera = Camera(
    "camera1",
    client_b,
    av_a,
    update_priority=1,
    requested_update_time=0.00000001,
    pos=(0, 0, 3),
    dir=(-0.006556159351021051, 0.9984973669052124, -0.0544055812060833),
    up=(-0.0066, 0.9569, 0.2904),
    field_of_view_y=38,
    near_far_planes=(0.1, 500),
    resolution=(640, 640),
    is_streaming=True,
    is_render_colours=True,
    is_render_annotations=False,
    is_render_depth=False,
)
time.sleep(5)  # give the sensor time to start producing frames
while True:
    # poll_raw returns raw bytes; the colour buffer is width * height * 4 (RGBA).
    decoded = np.frombuffer(camera.poll_raw()["colour"], dtype=np.uint8)
    decoded = decoded.reshape(640, 640, 4).copy()  # must match resolution=(640, 640)
    img = cudaFromNumpy(decoded)  # net and output come from the jetson-inference stack
    detections = net.Detect(img, overlay="box,labels,conf")
    for detection in detections:
        print(f"Detection {detection}")
    output.Render(img)

Also, I have one more question: how do I determine the exact camera direction and up values?
I used the Camera Transform tool in the World Editor to get the camera’s characteristics.
The support team told me the following:

Getting Camera Direction and Up Parameters in the World Editor
In the World Editor, the "Camera Transform" tool displays a 7-element array:
- The first three elements represent the camera's position (X, Y, Z).
- The last four elements are a quaternion (X, Y, Z, W) that defines the camera's orientation. 
This quaternion can be converted into direction and up vectors to align the camera’s view accurately.

But they did not explain how exactly to convert the quaternion into direction and up vectors. Maybe you can suggest a way to do it? My current understanding is sketched below.
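As far as I understand, it boils down to rotating the camera’s local forward and up axes by the quaternion; which local axes count as “forward” and “up” is my assumption here (untested):

from scipy.spatial.transform import Rotation

# Quaternion in (X, Y, Z, W) order, as read from the Camera Transform tool;
# the identity rotation below is just a placeholder.
qx, qy, qz, qw = 0.0, 0.0, 0.0, 1.0
rot = Rotation.from_quat([qx, qy, qz, qw])

# Assumption: the camera's local forward axis is +Y and its local up axis is +Z;
# if the resulting view is flipped, try other axis conventions (e.g. -Y forward).
direction = rot.apply([0.0, 1.0, 0.0])
up = rot.apply([0.0, 0.0, 1.0])
print(direction, up)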

Moved question about camera positioning into separate topic:

Hi byzkrovnyi,

If you’re experiencing performance issues, here are two suggestions that might help:

  1. Camera API Instead of Camera Sensor: Consider using the Camera API, which utilizes the main window renderer, instead of the camera sensor. The camera sensor acts as an extra renderer window, which can negatively impact performance. The Camera API leverages the main rendering pipeline and could provide a significant FPS improvement. You can check the Camera API in BeamNGpy here:
    BeamNGpy Camera API Documentation

  2. Run on Charger Mode: If you’re using a laptop, make sure it’s plugged in and running on charger mode rather than battery mode. Laptops often reduce CPU and GPU performance to save power when on battery, which can severely affect the simulator’s performance. Running on charger mode will allow your laptop to utilize its full processing power, resulting in better FPS and overall performance.

Let me know if these suggestions help or if you need further assistance!

Hi @asaeed, thanks for the response.

But I am not sure that I can grab an image or bytes from the Camera API.
It seems like the Camera API is just for camera control.

Hi @byzkrovnyi,

That’s correct, my last response regarding the Camera API was primarily about controlling the renderer window, not polling image data. I’m glad you brought that up.