---
uid: arfoundation-image-capture
---
# Image capture
Your app can access images captured by the device camera if the following conditions are met:

- The device platform supports the camera feature
- The user has accepted any required camera permissions
- The camera feature is enabled, for example an ARCameraManager is active and enabled

The method you choose to access device camera images depends on how you intend to process the image. There are tradeoffs between a GPU-based and a CPU-based approach.
## Understand GPU vs CPU
There are two ways to access device camera images:
- GPU: The GPU offers the best performance if you only need to render the image or process it with a shader.
- CPU: Use the CPU if you need to access the image's pixel data in a C# script. This is more resource-intensive, but lets you perform operations such as saving the image to a file or passing it to a computer vision system.
## Access images via GPU
Camera Textures are usually external Textures that do not persist beyond a frame boundary. You can copy the camera image to a Render Texture to persist it or process it further.

The following code sets up a command buffer that immediately performs a GPU copy, or "blit", to a Render Texture of your choice. The code clears the Render Texture before the copy by calling ClearRenderTarget.
[!code-csGPU_Blit]
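As a minimal sketch of this approach (not the full sample): it assumes an ARCameraManager component on the same GameObject, a pre-allocated Render Texture assigned to the hypothetical `targetTexture` field, and blits the first camera texture each frame.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.XR.ARFoundation;

public class CameraBlit : MonoBehaviour
{
    // Hypothetical field: assign a Render Texture of the desired size in the Inspector.
    public RenderTexture targetTexture;

    void OnEnable()
    {
        // ARCameraManager raises frameReceived each time a new camera image is available.
        GetComponent<ARCameraManager>().frameReceived += OnFrameReceived;
    }

    void OnDisable()
    {
        GetComponent<ARCameraManager>().frameReceived -= OnFrameReceived;
    }

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        if (args.textures.Count == 0)
            return;

        var commandBuffer = new CommandBuffer { name = "Camera Blit" };
        commandBuffer.SetRenderTarget(targetTexture);

        // Clear the render target before the copy.
        commandBuffer.ClearRenderTarget(true, true, Color.clear);

        // Copy the external camera texture into the persistent Render Texture.
        commandBuffer.Blit(args.textures[0], targetTexture);

        Graphics.ExecuteCommandBuffer(commandBuffer);
        commandBuffer.Dispose();
    }
}
```

Because the copy happens entirely on the GPU, the pixel data never crosses to the CPU, which is what makes this path fast.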
## Access images via CPU
To access the device camera image on the CPU, first call ARCameraManager.TryAcquireLatestCpuImage to obtain an XRCpuImage.
> [!NOTE]
> On iOS 16 or newer, you can also use ARKitCameraSubsystem.TryAcquireHighResolutionCpuImage. See High resolution CPU image to learn more.
XRCpuImage is a struct that represents a native pixel array. When your app no longer needs this resource, you must call XRCpuImage.Dispose to release the associated memory back to the AR platform. You should call Dispose as soon as possible, because holding too many undisposed XRCpuImage instances can cause the AR platform to run out of memory and prevent you from capturing new camera images.
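The acquire-and-dispose pattern can be sketched as follows, assuming `cameraManager` is a reference to your scene's ARCameraManager:

```csharp
if (cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
{
    try
    {
        // Use the image here: convert it or read its raw planes.
    }
    finally
    {
        // Release the native resource even if processing throws.
        image.Dispose();
    }
}
```

The try/finally guarantees the native memory is returned to the AR platform on every code path.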
Once you have an XRCpuImage, you can convert it to a Texture2D or access the raw image data directly:
- Synchronous conversion to a grayscale or color TextureFormat
- Asynchronous conversion to grayscale or color
- Raw image planes
### Synchronous conversion
To synchronously convert an XRCpuImage to a grayscale or color format, call XRCpuImage.Convert:
```csharp
public void Convert(
    XRCpuImage.ConversionParams conversionParams,
    IntPtr destinationBuffer,
    int bufferLength)
```
This method converts the XRCpuImage to the TextureFormat specified by the ConversionParams, then writes the data to destinationBuffer.
Grayscale image conversions such as TextureFormat.Alpha8 and TextureFormat.R8 are typically very fast, while color conversions require more CPU-intensive computations.
If needed, use XRCpuImage.GetConvertedDataSize to get the required size for destinationBuffer.
#### Example
The example code below executes the following steps:

- Acquire an XRCpuImage
- Synchronously convert it to the RGBA32 color format
- Apply the converted pixel data to a texture
[!code-csSynchronous_Conversion]
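These steps can be sketched as follows. This is a minimal sketch, not the full sample: `cameraManager` and `m_Texture` are illustrative names, and the unsafe pointer access requires "Allow unsafe code" in your project settings.

```csharp
using System;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class CpuImageSample : MonoBehaviour
{
    // Hypothetical reference to the scene's ARCameraManager.
    public ARCameraManager cameraManager;
    Texture2D m_Texture;

    unsafe void Update()
    {
        // Step 1: acquire the latest camera image on the CPU.
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            return;

        var conversionParams = new XRCpuImage.ConversionParams
        {
            inputRect = new RectInt(0, 0, image.width, image.height),
            outputDimensions = new Vector2Int(image.width, image.height),
            outputFormat = TextureFormat.RGBA32,
            // Mirror vertically to match Unity's texture coordinate convention.
            transformation = XRCpuImage.Transformation.MirrorY
        };

        if (m_Texture == null)
            m_Texture = new Texture2D(image.width, image.height, TextureFormat.RGBA32, false);

        // Step 2: synchronously convert into the texture's raw pixel buffer.
        var rawData = m_Texture.GetRawTextureData<byte>();
        image.Convert(conversionParams, new IntPtr(rawData.GetUnsafePtr()), rawData.Length);

        // Release the native image as soon as the conversion is done.
        image.Dispose();

        // Step 3: upload the converted pixel data to the GPU.
        m_Texture.Apply();
    }
}
```

Writing directly into GetRawTextureData avoids an extra managed-memory copy before Apply uploads the pixels.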
The AR Foundation Samples GitHub repository contains a similar example that you can run on your device.
### Asynchronous conversion
If you do not need to access the converted image immediately, you can convert it asynchronously.
Asynchronous conversion has three steps:

1. Call XRCpuImage.ConvertAsync(XRCpuImage.ConversionParams). ConvertAsync returns an XRCpuImage.AsyncConversion object to track the conversion status.

   > [!NOTE]
   > You can dispose the XRCpuImage before the asynchronous conversion completes. The data contained by the XRCpuImage.AsyncConversion is not bound to the XRCpuImage.

2. Await the AsyncConversion status until the conversion is done:

   ```csharp
   while (!conversion.status.IsDone())
       yield return null;
   ```

   After the conversion is done, read the status value to determine whether the conversion succeeded. AsyncConversionStatus.Ready indicates a successful conversion.

3. If successful, call AsyncConversion.GetData&lt;T&gt; to retrieve the converted data. GetData&lt;T&gt; returns a NativeArray&lt;T&gt; that is a view into the native pixel array. You don't need to dispose this NativeArray, as AsyncConversion.Dispose will dispose it.

   > [!IMPORTANT]
   > You must explicitly dispose the XRCpuImage.AsyncConversion. Failing to dispose an AsyncConversion will leak memory until the XRCameraSubsystem is destroyed.
Asynchronous requests typically complete within one frame, but can take longer if you queue multiple requests at once. Requests are processed in the order they are received, and there is no limit on the number of requests.
#### Examples
[!code-csAsynchronous_Conversion]
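The three steps above can be sketched as a coroutine. This is a minimal sketch: `cameraManager` is an assumed reference to your scene's ARCameraManager, and you would start the coroutine from your own code.

```csharp
using System.Collections;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class AsyncConversionSample : MonoBehaviour
{
    // Hypothetical reference to the scene's ARCameraManager.
    public ARCameraManager cameraManager;

    IEnumerator ConvertImageAsync()
    {
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            yield break;

        // Step 1: start the conversion, then dispose the image.
        // The conversion's data is not bound to the XRCpuImage.
        var conversion = image.ConvertAsync(new XRCpuImage.ConversionParams
        {
            inputRect = new RectInt(0, 0, image.width, image.height),
            outputDimensions = new Vector2Int(image.width, image.height),
            outputFormat = TextureFormat.RGBA32
        });
        image.Dispose();

        // Step 2: wait until the conversion completes.
        while (!conversion.status.IsDone())
            yield return null;

        if (conversion.status == XRCpuImage.AsyncConversionStatus.Ready)
        {
            // Step 3: the NativeArray is a view into native memory;
            // it is disposed along with the conversion.
            NativeArray<byte> data = conversion.GetData<byte>();
            // ... use data here, or copy it to persist it ...
        }

        // Always dispose the conversion to avoid leaking memory.
        conversion.Dispose();
    }
}
```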
There is also an overload of ConvertAsync that accepts a delegate and does not return an XRCpuImage.AsyncConversion, as shown in the example below:
[!code-csAsynchronous_Conversion_With_Delegate]
If you need the data to persist beyond the lifetime of your delegate, make a copy. See NativeArray<T>.CopyFrom.
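A sketch of the delegate-based overload, assuming an XRCpuImage has already been acquired as shown earlier:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARSubsystems;

static class DelegateConversionSample
{
    // Hypothetical helper: starts a conversion and handles the result in a delegate.
    public static void ProcessImage(XRCpuImage image)
    {
        image.ConvertAsync(
            new XRCpuImage.ConversionParams
            {
                inputRect = new RectInt(0, 0, image.width, image.height),
                outputDimensions = new Vector2Int(image.width, image.height),
                outputFormat = TextureFormat.RGBA32
            },
            // The delegate is invoked when the conversion completes.
            (status, conversionParams, data) =>
            {
                if (status != XRCpuImage.AsyncConversionStatus.Ready)
                {
                    Debug.LogErrorFormat("Async conversion failed with status {0}", status);
                    return;
                }

                // 'data' is only valid inside this delegate;
                // copy it if you need it afterwards.
                // ... use data here ...
            });

        // The conversion is not bound to the image, so it can be disposed now.
        image.Dispose();
    }
}
```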
### Raw image planes
> [!NOTE]
> An image "plane", in this context, refers to a channel used in the video format. It is not a planar surface and is not related to ARPlane.
Most video formats use a YUV encoding variant, where Y is the luminance plane, and the UV plane(s) contain chromaticity information. U and V can be interleaved or separate planes, and there might be additional padding per pixel or per row.
If you need access to the raw, platform-specific YUV data, you can get each image "plane" using the XRCpuImage.GetPlane method as shown in the example below:
```csharp
if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    return;

// Consider each image plane
for (int planeIndex = 0; planeIndex < image.planeCount; ++planeIndex)
{
    // Log information about the image plane
    var plane = image.GetPlane(planeIndex);
    Debug.LogFormat("Plane {0}:\n\tsize: {1}\n\trowStride: {2}\n\tpixelStride: {3}",
        planeIndex, plane.data.Length, plane.rowStride, plane.pixelStride);

    // Do something with the data
    MyComputerVisionAlgorithm(plane.data);
}

// Dispose the XRCpuImage to avoid resource leaks
image.Dispose();
```
XRCpuImage.Plane provides direct access to a native memory buffer via a NativeArray&lt;byte&gt;. This is a view into the native memory, so you don't need to dispose the NativeArray. You should consider this memory read-only; its data is valid only until the XRCpuImage is disposed.