Notes on multiple stream (e.g. CSI-2 virtual channel and data type)
support in V4L2


Formats and Multiplexed Pads

* A mux sub-device may reasonably affect the format as well
  * Think of a device with two CSI-2 receivers and a single
    transmitter
  * One line of each input is transmitted alternately, resulting in
    an image twice as high as the original
* If we need to inform userspace or driver code that a pad carries
  multiplexed streams, we can add a pad flag.

struct v4l2_mbus_frame_desc
struct v4l2_mbus_frame_desc_entry

- flags: That information is useful, but likely useful to the device
  after the demultiplexer, not to the demultiplexer itself. The frame
  descriptors are not propagated to downstream subdevs after the
  demultiplexer, making the API complicated to use. As no driver
  currently uses those flags, we should remove them for now.
- bpp, uncompressed_bpp: could be obtained from the media bus format
- pixelcode: the media bus pixel code [1]
- stream_group: TBD, see "Start Streaming" section
- size.two_dim.start_line, size.two_dim.start_pixel: originally for
  CCP2, can be removed for now
- size.two_dim.width, size.two_dim.height: frame width and height
- size.length: Valid for blob data only. For compressed blobs (e.g.
  JPEG), this should be the maximum length, not the actual length. Do
  we have any use case for fixed-size blobs? Embedded data is not
  such a use case, as it is transferred in 2D mode similarly to
  images. JPEG images are similar in that regard, as the CSI-2
  maximum long packet size is 65535 words, requiring JPEG blobs to be
  split into multiple long packets, and thus into "lines".
- bus.csi2.channel, bus.csi2.data_type: Needed in the kernel. Most
  probably no use case for exposing these to userspace, but to be
  confirmed.


Stream Filtering

The .s_routing() operation is used to configure the CSI-2
receiver/demuxer. VC and DT are not exposed to userspace, but
transferred internally within the kernel through frame descriptors.
Only stream IDs are visible from userspace.
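As a rough illustration of the stream filtering model above, the
sketch below shows how a demuxer driver could map a received CSI-2
(virtual channel, data type) pair, learned internally from the frame
descriptors, to the stream ID that userspace sees. All names
(demux_route, demux_find_stream) are hypothetical, not existing
kernel API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical routing entry: maps a CSI-2 (virtual channel, data
 * type) pair, taken from the upstream frame descriptor, to the
 * userspace-visible stream ID. Illustrative only.
 */
struct demux_route {
	uint8_t vc;        /* CSI-2 virtual channel (0-3) */
	uint8_t data_type; /* CSI-2 data type, e.g. 0x2b for RAW10 */
	uint32_t stream;   /* stream ID exposed to userspace */
};

/* Example table: image data and embedded metadata on the same VC. */
static const struct demux_route example_routes[] = {
	{ .vc = 0, .data_type = 0x2b, .stream = 0 },
	{ .vc = 0, .data_type = 0x12, .stream = 1 },
};

/* Look up the stream ID for a received (vc, dt) pair; -1 if unrouted. */
static int demux_find_stream(const struct demux_route *routes, size_t n,
			     uint8_t vc, uint8_t dt)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (routes[i].vc == vc && routes[i].data_type == dt)
			return (int)routes[i].stream;
	return -1;
}
```

An unrouted (vc, dt) pair is simply filtered out, which matches the
idea that VC and DT stay internal to the kernel.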
We have use cases where we want to capture multiple DTs in one VC
(either all of them, or a subset if the hardware supports that) in
the same buffer through the same DMA engine. The DTs need to be
output on a single non-multiplexed source pad. This can still be
configured using .s_routing() by enabling routes from multiple input
streams to the same source pad.


Start Streaming

* We need to move streaming control from subdevs to routes inside
  subdevs. A route is specified by source/stream + sink/stream.
* One option is to turn the video s_stream() operation into a pad
  operation that takes a stream ID. Streaming would then be
  controlled on source pads. This could however be a problem for V4L2
  output devices, as streaming would then need to be controlled on
  sink pads.
* Another option is to pass the full route (as a (sink pad, sink
  stream, source pad, source stream) tuple) instead of a (pad,
  stream) couple. This would solve the problem above.
* If we base the API on full routes, a core helper function can
  easily be provided to turn a single pad into a full route
  specification based on the configured routes.
* If we base the API on a single pad, subdev drivers can easily find
  the full route in a similar fashion if needed.
* Do we have a need to start streaming on multiple routes at the same
  time, or is a single route enough?
  * Some stereo cameras use a GPIO to start streaming on two sensors
    synchronously.
* Some CSI-2 receivers (or other kinds of subdevs in the pipeline)
  have to be configured with all streams before they can be started.
  * In that case userspace will need to configure all streams before
    starting streaming on the first one.
  * Any attempt to (re)configure a stream while another stream is
    running would then fail with an error.
  * We could use pad flags and/or subdev flags to report such
    subdevs.
* Some subdevs have to start/stop streaming as a whole, not per
  route.
  * If the subdev handles multiple independent streams (e.g. a CSI-2
    aggregator with multiple independent source sensors), we can
    start it when the first stream is started. All streams have to be
    configured before starting the first one, as in the previous
    case.
  * If the subdev handles multiple dependent streams (e.g. image and
    metadata), we should only start streaming when all streams are
    started, to avoid losing frames on one stream and thus losing
    synchronization.
  * This depends on the sensor's s_stream() callback implementation
    (recursively called from driver to driver).
* When starting streaming on a multiplexed link, we may need to know
  beforehand information about all streams that will be started on
  that link before starting the first one. One example is link
  frequency selection, which requires taking the overall bandwidth of
  all the streams into account. The link frequency is selected
  through the V4L2_CID_LINK_FREQ control by userspace, so the kernel
  doesn't have to care. Are there other parameters that would need to
  be computed inside the kernel and would require gathering
  information from all streams (possibly bandwidth-related
  parameters)?
* If a sensor's metadata cannot be disabled but the receiver cannot
  receive it:
  * In the case of the smiapp driver, the link from the metadata
    sub-device to the mux is static and enabled.
  * Metadata is to be disabled by removing the route in the sensor
    mux sub-device --- this way the sensor driver knows that
    streaming is started without the metadata stream.
* If a receiver must be configured with all streams before it can be
  started:
  * All pipelines through the receiver must be validated when the
    first pipeline is started.
  * Add a DEPENDENT flag to the multiplexed pad (or entity) to tell
    the pipeline start/stop code that other pipelines must be started
    as well.
* Let's make sure that we don't develop a generic, over-engineered
  option that wouldn't be able to cover all use cases. We might have
  no choice but to get the subdev drivers involved in the decision,
  using a recursive .s_stream() call model.
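The core helper suggested above, turning a single (source pad,
stream) pair into a full route specification, could be sketched as
follows. The structure and function names (subdev_route,
find_route_by_source) are assumptions for illustration, not the
actual kernel API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical full route specification: a (sink pad, sink stream,
 * source pad, source stream) tuple, as discussed above.
 */
struct subdev_route {
	uint32_t sink_pad, sink_stream;
	uint32_t source_pad, source_stream;
};

/* Example routing: two sink pads muxed onto source pad 2. */
static const struct subdev_route example_routing[] = {
	{ .sink_pad = 0, .sink_stream = 0, .source_pad = 2, .source_stream = 0 },
	{ .sink_pad = 1, .sink_stream = 0, .source_pad = 2, .source_stream = 1 },
};

/*
 * Sketch of the proposed core helper: expand a (source pad, stream)
 * pair into the full route, based on the configured routing table.
 */
static bool find_route_by_source(const struct subdev_route *routes,
				 size_t n, uint32_t source_pad,
				 uint32_t source_stream,
				 struct subdev_route *route)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (routes[i].source_pad == source_pad &&
		    routes[i].source_stream == source_stream) {
			*route = routes[i];
			return true;
		}
	}
	return false;
}

/* Convenience wrapper: return the matching sink pad, or -1. */
static int route_sink_pad_for(const struct subdev_route *routes, size_t n,
			      uint32_t source_pad, uint32_t source_stream)
{
	struct subdev_route r;

	return find_route_by_source(routes, n, source_pad, source_stream,
				    &r) ? (int)r.sink_pad : -1;
}
```

With such a helper, a pad-based s_stream() API and a route-based one
become largely equivalent for subdevs whose routing is unambiguous.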
  * The implementation must be careful to avoid infinite loops when
    multiple pads with the DEPENDENT flag set are found.
    * Keep track of the pipelines that have been seen.
    * It is not enough to check whether a DEPENDENT flag has been
      seen: the pipelines through that pad may be different than on
      the first DEPENDENT pad seen.
* Should s_stream() operations be recursive or called from a single
  location through a graph walk?
  * The recursive option could lead to stack overflows.
    * This could be mitigated by using a hybrid model where
      .s_stream() would be recursive from a top-level point of view,
      but with sequential calls for subdevs handled by a single
      driver. The first .s_stream() handler called for that subdev
      group would be responsible for calling .s_stream() sequentially
      on all other subdevs in the group, as well as recursively for
      the next subdev outside the group.
  * The non-recursive option implies a fixed order; it doesn't allow
    drivers to select whether the source is started first or last.
    * This could be mitigated by adding subdev/pad flags (or a
      similar mechanism that would report information to the pipeline
      walker).
  * We have to pick one option and migrate the existing code. This
    should be discussed with the upstream community as a standalone
    topic.


Proposal

* Ignore formats on pads with multiple streams.
  * Userspace will know about pads which carry multiple streams and
    can act upon that when propagating formats.
  * Userspace can traverse the media graph from a sink pad of a muxer
    to a source pad of the demuxer, or the other way around.
* Add a V4L2 subdevice flag to indicate that a pad has multiple
  streams, and associate it with such pads when they are defined.
  * The kernel can verify formats, since it has access to the mapping
    from pads with streams to their corresponding pads which do not
    have streams.
* Move s_stream() to the pad operations structure (just add a new
  operation for the prototype and do not care about existing
  drivers).
* Extend s_stream() to carry a 'stream' parameter to indicate which
  stream of the pad to start.


Action Points

* Remove the frame descriptor fields that we don't need for now.
  (Sakari)
* Use the frame descriptors in an upstream driver (R-Car VIN) and see
  how that works out. (Niklas)
* Existing drivers should be checked to see whether they depend on
  both V4L2_CID_LINK_FREQ and V4L2_CID_PIXEL_RATE. If no such driver
  exists, maybe Documentation/media/kapi/csi2.rst could be updated to
  state that only one of the two controls is required.
* Make sure that pad flags are not directly compared. A logical AND
  must be used instead. (Niklas)
  * Should we provide helper functions to check the pad type to avoid
    direct comparisons (e.g. media_pad_is_sink())?
* Document, for each pad operation, which type(s) of pad they apply
  to (are valid for). Story-like documentation would also be very
  useful, with a few sample use cases.


Topics for Thursday

* First draft for start streaming and format handling (using R-Car
  VIN as a test bed/use case)
* DT bindings
* Media device sharing
* Subdev sharing
* VIN test bed (MAX9286 + MAX9271 + OV10635)


Footnotes

[1] Terminology is confusing with pixel format, pixel code, media bus
    format, ... Strictly speaking out of scope, but we should do
    something about it.
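The pad-flag action point above can be made concrete with a small
sketch. The MEDIA_PAD_FL_* values below are copied from
include/uapi/linux/media.h; the helper functions are the kind of
media_pad_is_sink()-style helpers being proposed, not existing API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flag values as defined in include/uapi/linux/media.h. */
#define MEDIA_PAD_FL_SINK	(1U << 0)
#define MEDIA_PAD_FL_SOURCE	(1U << 1)
#define MEDIA_PAD_FL_MUST_CONNECT (1U << 2)

/* Reduced stand-in for struct media_pad, for illustration. */
struct media_pad {
	uint32_t flags;
};

/*
 * Proposed helpers: test the type bit with a logical AND instead of
 * comparing pad->flags directly, so that additional flags (such as
 * MUST_CONNECT, or a future multiplexed-streams flag) don't break
 * the check.
 */
static inline bool media_pad_is_sink(const struct media_pad *pad)
{
	return pad->flags & MEDIA_PAD_FL_SINK;
}

static inline bool media_pad_is_source(const struct media_pad *pad)
{
	return pad->flags & MEDIA_PAD_FL_SOURCE;
}
```

A pad with flags == (MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT)
is still a sink pad with these helpers, whereas a direct
`flags == MEDIA_PAD_FL_SINK` comparison would wrongly reject it.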