Analytics - Best Practice
When using object detection algorithms or machine learning models to analyze images, having high-quality images is crucial for accurate data processing. One of the most important factors to consider is lighting. Proper lighting can significantly enhance image quality, thereby improving the accuracy of the analysis.
This article explores the importance of lighting for image quality and provides guidance on how to optimize camera settings for better object detection performance.
The Importance of Lighting
Lighting plays a vital role in determining the contrast, color, and sharpness of an image. Poor lighting can lead to blurry or distorted images, making it difficult for object detection algorithms to accurately identify objects within the scene.
Optimizing Camera Settings
Several camera settings can be adjusted to optimize image quality for object detection. Key settings include:
Exposure
Exposure controls how much light enters the camera. Increasing exposure can brighten the image but may lead to overexposure and loss of detail in bright areas. Decreasing exposure can result in a darker image but may cause underexposure and loss of detail in darker areas. It is generally recommended to use the auto exposure setting, which adjusts based on lighting conditions.
However, in low-light conditions, auto exposure may not provide enough light, especially at night or in environments with minimal artificial lighting. In these cases, it’s essential to use the right settings to improve image quality.
Most cameras offer several modes, such as Automatic, Manual, and Semi-Automatic. For example, Uniview cameras feature a "Low Motion Blur" mode, which is useful in low-light conditions. This mode allows users to set the slowest acceptable shutter speed, capturing more light while limiting motion blur.
A recommended starting point for shutter speed is 1/120, provided sufficient light is available. In darker conditions, this can be lowered to 1/60, although some motion blur may be introduced.
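The trade-off between the 1/120 and 1/60 shutter speeds above can be made concrete with a quick estimate. The sketch below uses a simple pinhole-camera model; the walking speed, distance, lens angle, and stream width are illustrative assumptions, not values from any specific camera.

```python
import math

def motion_blur_px(speed_mps, distance_m, hfov_deg, image_width_px, shutter_s):
    """Approximate motion blur (in pixels) for an object crossing the frame.

    Pinhole model: the scene width visible at the object's distance is
    2 * d * tan(hfov / 2), so one metre of motion maps to
    image_width / scene_width pixels.
    """
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    px_per_metre = image_width_px / scene_width_m
    return speed_mps * shutter_s * px_per_metre

# A person walking (~1.4 m/s) 10 m away, 90-degree lens, 1920 px wide stream:
blur_120 = motion_blur_px(1.4, 10, 90, 1920, 1 / 120)  # about 1.1 px of blur
blur_60 = motion_blur_px(1.4, 10, 90, 1920, 1 / 60)    # about 2.2 px of blur
```

Halving the shutter speed doubles the blur, which is why 1/60 is acceptable only when the extra light is genuinely needed.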
Example of Image Adjustment
When exposure is manually adjusted, it’s possible to find a balance between highlights and shadows. For instance, in bright daylight, a shorter exposure prevents the image from becoming too bright, but in low-light conditions, the same setting can result in an underexposed image. Setting exposure to auto can offer a good balance for varying environments, but in certain scenarios (like rainy days or nighttime), adjusting the exposure manually may provide better results.
Overexposed Image
A slow shutter speed and high exposure can make an image too bright, causing areas with excessive light to be indistinguishable, potentially hindering object detection in those regions.
Underexposure
When a short exposure (fast shutter speed) is used, certain areas may appear too dark, and objects could become difficult to detect, as seen in the example where underexposure caused one person in the image to be missed by the detection system.
Image Adjustment
If the image is too dark or objects are undetectable, the camera's image adjustment features can help. Increasing contrast and sharpness can enhance the differentiation between objects. It is generally advised to keep brightness levels balanced and reduce saturation, as lower saturation can improve contrast and assist in more accurate object detection.
White Balance
White balance adjusts the color temperature of an image to make it look natural. Different lighting conditions can cause color shifts, which can impact the accuracy of the analysis. Adjusting white balance ensures the image appears more natural and improves color accuracy for better detection.
Wide Dynamic Range (WDR)
WDR enhances images with significant contrast between light and dark areas. Two types of WDR are commonly available: software WDR and hardware WDR. Hardware WDR is generally more effective in challenging lighting conditions, as it uses specialized hardware to balance brightness and contrast, whereas software WDR relies on software processing.
Enabling WDR can significantly improve image quality in environments with wide light contrasts, but excessive WDR compensation may blur the image, reducing its overall effectiveness. Finding the right balance between WDR compensation and image sharpness is crucial.
Digital Noise Reduction
Digital noise reduction helps remove unwanted noise from images, resulting in clearer and more detailed pictures. There are two types of noise reduction: 2D and 3D.
2D noise reduction removes horizontal and vertical noise, helping to clean up the image.
3D noise reduction eliminates noise across all three dimensions, offering superior results for cleaner, more detailed images.
While increasing noise reduction improves image quality, excessive noise reduction may cause blurring, which can reduce the accuracy of object detection. It is best to adjust the noise reduction settings gradually to find the optimal balance.
Defog
Defog technology enhances visibility in foggy or hazy conditions by increasing contrast and reducing glare. This feature can help make objects more distinguishable when weather conditions compromise image quality.
Note: If the Defog option is unavailable, disabling WDR may enable Defog, as some cameras cannot use both features simultaneously.
Recommendations for Optimizing Camera Settings
To maximize object detection performance, consider the following tips:
Use natural lighting whenever possible. Avoid harsh artificial lighting or direct sunlight, which can create problematic shadows and highlights.
Experiment with camera settings to find the best balance for exposure and white balance. Testing different settings in various environments (e.g., daylight, nighttime, or indoor) helps establish optimal parameters for each scenario.
Conclusion
While features like exposure adjustment, white balance, WDR, noise reduction, and defog can improve image quality, they should be used cautiously. Over-enhancement can introduce artifacts or distortions, negatively impacting object detection accuracy. Striking a balance between image quality and computational efficiency is key. Additionally, considering the camera’s processing power is important when using these advanced features to avoid performance issues. By experimenting and fine-tuning these settings, camera setups can be optimized for better object detection, even in challenging lighting conditions.
Securing remote sites often necessitates the use of 4G mobile networks as the primary means of connectivity. In these scenarios, minimizing camera bandwidth usage becomes essential to adhere to data limits imposed by internet service providers. The 3dEYE platform enables smart object detection on any ONVIF-compatible camera, regardless of video stream resolution, facilitating automated monitoring of remote locations through smart alerts. Under adequate lighting conditions, object detection can identify items as small as 10 pixels wide. Achieving optimal performance at low resolutions requires careful consideration of the following guidelines:
1. Camera Placement:
Positioning the camera to maximize its viewing angle enhances AI accuracy and supports the use of lower resolutions.
Maintaining a clear lens, free from obstructions like trees, ensures consistent detection.
Excessively steep angles can negatively impact object recognition capabilities.
Cameras with narrow fields of view, achieved through larger lenses, are recommended for detecting objects at greater distances.
Cameras installed above 8 feet may capture a wider field of view but often fail to capture detailed features like faces or clothing, which can reduce analytics effectiveness.
Securing the camera on a firm foundation minimizes swaying and false motion detections.
2. Lighting Adjustments:
Providing proper illumination improves contrast. Avoid positioning cameras in shaded areas facing brightly lit regions.
Activating WDR enhances image quality in environments with varying lighting conditions. Using lower WDR values helps prevent motion blur in moving objects.
Avoiding overexposure or underexposure enhances the effectiveness of analytics.
3. Shutter Speed Settings:
High shutter speeds (e.g., 1/100 or faster) are recommended to reduce motion blur caused by moving objects.
4. Frame Rate (FPS):
Lowering the frame rate can also significantly reduce bandwidth usage while maintaining monitoring efficiency, particularly in situations where detecting fast-moving objects is unnecessary. Key recommendations include:
Reducing FPS from 12 to 3–5 frames per second can save 30–40% bandwidth, particularly effective for environments such as construction sites.
Many cameras operate more reliably at lower FPS due to decreased CPU consumption.
Extremely low FPS (e.g., 1 or below) is not advised, as it provides minimal additional bandwidth savings while degrading user experience and detection accuracy.
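The 30–40% figure above reflects the fact that with inter-frame codecs (H.264/H.265), bitrate does not scale linearly with FPS: keyframes and scene complexity contribute a roughly constant share. The sketch below models that with an assumed 50% fixed share; the split is an illustrative assumption, not a measured value.

```python
def estimated_savings(old_fps, new_fps, fixed_share=0.5):
    """Rough bitrate-savings estimate when lowering FPS.

    fixed_share is the assumed fraction of bitrate (keyframes, scene
    complexity) that does not scale with frame rate.
    """
    return 1 - (fixed_share + (1 - fixed_share) * new_fps / old_fps)

# Dropping from 12 to 4 FPS under this model saves about a third:
print(f"{estimated_savings(12, 4):.0%}")  # prints "33%"
```

This lands in the 30–40% range quoted above, and it also shows why going below 1 FPS buys little: the fixed share dominates long before that.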
An example of an efficient camera configuration, running at a resolution of 640x360 pixels and a frame rate of 5 FPS within a bandwidth of 200 Kbps, can reliably:
Detect individuals within moving vehicles at close range.
Differentiate between moving and stationary vehicles in high-traffic conditions over longer distances.
In scenes with clear contrast and distinguishable objects, the bitrate can be reduced further while maintaining satisfactory results.
These practices support effective monitoring while optimizing network and storage resources.
The primary distinction between Advanced and Basic Analytics lies in the availability of Statistical Analysis, which is exclusive to cameras equipped with Advanced Analytics. Basic Analytics, on the other hand, provides information on the type of object detected, its location, size, and color attributes.
Advanced Analytics
Advanced Analytics is supported only by P&P and ONVIF devices and cannot be used with devices added through the Generic option. In addition to object detection and classification, Advanced Analytics calculates additional statistical data such as people counting and heatmaps.
In this mode, the system analyzes the video feed frame by frame for the entire duration of the event after receiving event information from the camera. If motion detection is configured via ONVIF, only objects that intersect the motion detection area by at least 25% will be identified as moving. For cameras that do not support the configuration of the motion detection mask over ONVIF, any moving object in the scene will be marked as a moving object.
Basic Analytics
Like Advanced Analytics, Basic Analytics is also compatible only with P&P and ONVIF devices, excluding devices added through the Generic option. However, it lacks statistical features such as people counting or heatmaps.
When an event is detected by the camera, the system begins frame-by-frame analysis of the video feed for 12 seconds—comprising the first 10 seconds of the event and the 2 seconds before it begins. If no object is detected within this window, any object entering the scene after this 12-second period will not be recognized. This analysis method is not recommended for cameras with a high false-positive rate, as the event may start before the object enters the scene, potentially causing the system to miss the object if it appears more than 10 seconds after the event starts.
Basic Analytics provides details on object size, color, and position within the scene.
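The 12-second analysis window described above can be sketched as a simple check. The timestamps are illustrative; the window itself (2 seconds of pre-roll plus the first 10 seconds of the event) is taken directly from the behavior described in this section.

```python
def basic_analytics_window(event_start_s):
    """Frame-analysis window for Basic Analytics: the 2 seconds before the
    event plus its first 10 seconds (12 seconds total)."""
    return (event_start_s - 2, event_start_s + 10)

def is_analyzed(detection_time_s, event_start_s):
    """True if an object appearing at detection_time_s falls inside the
    analysis window of an event starting at event_start_s."""
    start, end = basic_analytics_window(event_start_s)
    return start <= detection_time_s <= end

# An object appearing 11 s after the event starts is missed:
missed = is_analyzed(100 + 11, 100)   # False
# An object in the 2 s pre-roll is still analyzed:
preroll = is_analyzed(100 - 1, 100)   # True
```

This is why a high false-positive rate is problematic here: if the event fires early, the real object may arrive after the window closes.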
Object Tracking Interruptions
AI object tracking continues smoothly as long as the object remains within the camera’s view and does not intersect with other objects. Interruptions in tracking may occur in scenarios like:
A person moving behind obstructions, such as parked cars, barriers, or foliage, temporarily obscuring them from the camera.
A person standing still for several seconds, such as when waiting in a queue without movement.
A timestamp from the camera indicating that the event has stopped, followed by a new timestamp indicating the event has resumed, even though the event is still ongoing.
An additional object entering the camera’s view and partially obstructing the person as it passes by.
For cameras that support motion detection configuration via ONVIF, only objects that intersect with the selected detection area will trigger an alert. A minimum of 25% of an object's area must be within the detection perimeter to activate the alert. Objects outside the designated detection zone will be disregarded, and no alerts will be generated for them.
In the example provided, only objects within the highlighted area will trigger the alert, while people on the sidewalk or cars on the road will not be detected.
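The 25% rule above is a straightforward area computation on bounding boxes. The sketch below assumes axis-aligned pixel rectangles `(x1, y1, x2, y2)`; the specific boxes in the example are illustrative.

```python
def overlap_fraction(obj, zone):
    """Fraction of an object's bounding box that lies inside a detection
    zone. Boxes are (x1, y1, x2, y2) in pixels."""
    ix = max(0, min(obj[2], zone[2]) - max(obj[0], zone[0]))
    iy = max(0, min(obj[3], zone[3]) - max(obj[1], zone[1]))
    obj_area = (obj[2] - obj[0]) * (obj[3] - obj[1])
    return (ix * iy) / obj_area if obj_area else 0.0

def triggers_alert(obj, zone, threshold=0.25):
    """Apply the 25% minimum-overlap rule described above."""
    return overlap_fraction(obj, zone) >= threshold

# A 100x100 px object half inside the zone triggers; 10% inside does not:
half_in = triggers_alert((0, 0, 100, 100), (50, 0, 300, 300))  # True
edge_in = triggers_alert((0, 0, 100, 100), (90, 0, 300, 300))  # False
```

Note that the fraction is measured against the object's own area, not the zone's, which is why small, scattered zones (discussed below) rarely reach 25% of a large object.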
Examples of scenarios that may result in poor object detection:
Small and scattered detection areas:
When multiple small detection areas are spread out, the likelihood that an object will have at least 25% of its area within the detection zone is reduced. Smaller objects are more likely to be detected, while larger objects may not have sufficient overlap with the scattered areas to trigger an alert. While this configuration can help reduce false positives, it may also result in missed object detection events due to inadequate overlap.
Designating the entire image area as a detection region:
While this approach ensures that all objects are detected, it may lead to unnecessary alerts. For instance, if the goal is to monitor for people at the front door, setting the entire image area as a motion detection zone will generate alerts for all moving objects, such as people passing by or cars driving down the street. A more effective strategy would be to limit the detection area to the stairs and front porch, targeting only the desired zone of interest.
Proper camera setup is essential for achieving accurate recognition results. Common causes of poor smart analytics performance include low contrast, low resolution, improper camera positioning, and obstructions such as foliage.
Camera Resolution
The minimum resolution required depends on the distance at which objects need to be detected. There is no universal resolution that works for all scenarios.
Even with the same camera model in the same environment, the required resolution may vary depending on the lens being used.
To configure a camera, take screenshots and measure the number of pixels an object of interest occupies at the maximum desired detection distance. If the smallest dimension of the object occupies fewer than 30 pixels, either increase the resolution or switch to a camera with a narrower lens until the object occupies at least 40 pixels.
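If screenshots are impractical, the pixel count can also be estimated geometrically before installation. The sketch below uses the same pinhole model as a rough planning aid; the object size, distance, and lens angles are illustrative assumptions.

```python
import math

def object_pixels(object_size_m, distance_m, hfov_deg, image_width_px):
    """Approximate on-screen size (pixels) of an object at a given distance,
    assuming a simple pinhole model: the visible scene width at distance d
    is 2 * d * tan(hfov / 2)."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return object_size_m / scene_width_m * image_width_px

# A 0.5 m-wide person at 40 m on a 1920 px-wide stream:
wide = object_pixels(0.5, 40, 90, 1920)    # ~12 px with a 90-degree lens: too few
narrow = object_pixels(0.5, 40, 30, 1920)  # ~45 px with a 30-degree lens: meets the 40 px target
```

This illustrates the recommendation above: when the object falls under 30 pixels, a narrower lens (or higher resolution) is what brings it back over 40.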
Image Contrast
Poor image contrast is one of the most common reasons for failed object detection. The scene's illumination can change drastically between day and night or from sunny areas to shaded zones, making it difficult to balance contrast effectively. However, there are some recommendations for optimizing image contrast to improve object detection performance:
Identify the important areas of the image and determine which parts can be overexposed or underexposed. Often, parts of the image such as the sky or distant background are not important for surveillance but can affect how the camera adjusts exposure. This can lead to overexposed or underexposed important areas. Using Wide Dynamic Range (WDR), if available, can resolve this issue by selectively enhancing the contrast of critical areas without making bright areas too intense.
If the camera lacks WDR or if its results are not satisfactory, manual adjustments to exposure, brightness, and contrast should be made. Generally, high-contrast images will yield better results than soft, blurry ones.
Sharpness is another important parameter. While a less sharp image may appear smoother, the "soft" edges of objects can reduce the precision of object detection. Too much sharpness, however, can introduce noise, which degrades detection accuracy. Sharpness should be increased gradually and stopped just before noise becomes noticeable.
Camera Placement
Camera placement plays a significant role in object detection effectiveness. If a tree or bush blocks the primary path of movement or if the camera is positioned at an odd angle, the shapes of objects may become obscured. For example, if the camera's view is partially obstructed by a bush, a person walking in front of the camera may be mostly hidden and unrecognizable.
Similarly, positioning the camera too high and directing it straight down can make it difficult to identify objects. In this case, the person's shape may be unrecognizable, which reduces the reliability of object detection.
Scenario 1:
The event detector on the camera was turned off after all configurations were set correctly, so no events reach the analytics and no objects are detected.
To resolve this, enable the appropriate detection module on the camera.
Scenario 2:
Setting the camera event detection on an empty schedule or inverting active hours can cause the camera to miss events, leading to a lack of object detection or the generation of events during incorrect hours.
To fix this, enable the plan and/or adjust alarm active hours to meet the required needs.
Scenario 3:
Setting the incorrect line crossing direction can result in missed events for important occurrences while generating events for unwanted actions.
To avoid this, carefully evaluate the traffic flow direction that is significant for each camera.
Scenario 4:
Selecting the wrong object type for detection or failing to select objects relevant to the camera's location may result in incorrect detections.
Ensure that the correct objects are chosen for each camera, as there is no universal rule that applies to all cameras.
Step 1:
Enable Object Detection (additional charges may apply)
Smart analytics, also known as Server-side analytics or Object detection, depends on how the camera-side events are configured. To activate Object detection, select a plan with "+ Analytics," which can be either a fixed plan or a Pay-as-you-go plan with Analytics. This will unlock the "Object detection Analytics slider" on the camera's Analytics tab.
Step 2:
Choose a trigger event.
Any camera-side event can serve as a trigger for Smart analytics. Click the Add or Edit button to create a new Detection Rule or modify an existing one.
The Act on option determines which camera detection module activates server-side analytics. Visit this page for more information on the logic used for smart analytics.
Step 3:
Select the objects to be detected.
Multiple objects can be chosen for a single rule.
IMPORTANT: For cameras supporting Motion Detection configuration through ONVIF protocol, only objects within the motion detection area will be reported. For all other types of events and cameras without Motion Detection configuration via ONVIF (which do not have a Motion Detection tab), the entire frame is analyzed.
Step 4:
Set the confidence level for alert rules.
The confidence level sets the minimum certainty required for Object detection to trigger an alert, based on the rules defined in the Sensor Alert settings. This will impact alerts sent via email, push notifications, and webhooks.
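The confidence check described above amounts to a simple server-side filter. The sketch below is a minimal illustration; the detection dictionaries, field names, and 0.6 default are assumptions for the example, not the platform's actual API.

```python
def alerts(detections, confidence_level=0.6):
    """Keep only detections whose certainty meets the configured
    confidence level; only these would generate email, push, or
    webhook notifications."""
    return [d for d in detections if d["confidence"] >= confidence_level]

detections = [
    {"object": "person", "confidence": 0.92},
    {"object": "car", "confidence": 0.41},
]
passed = alerts(detections)  # only the 0.92 person detection passes
```

A higher confidence level reduces false alerts at the cost of occasionally missing marginal detections; the right value depends on the scene.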
Step 5:
Optimize the camera for maximum performance.
Although smart analytics can be performed on any quality of video stream, detection reliability improves significantly with optimized camera settings. Consider the following:
Proper placement of event detection zones/lines
Optimizing image quality, including appropriate resolution, frame rate, and light balance
Using the correct "Act on" triggers to focus on important areas and activities
Configuring Sensor Alerts to receive notifications and stay informed.
The following algorithm is followed by the servers:
1. The server waits for an event from the camera, which can be any event configured on the camera (e.g., motion detection, line crossing, intrusion).
2. Once the event is received, it is checked against the Detection Rules set in the admin portal, under the Analytics tab of the camera properties.
3. If the event meets the "Act on" requirement, the server begins a frame-by-frame analysis of the footage.
4. The server scans each frame of the video for the requested objects across the entire image throughout the event’s duration. The only exception is for cameras that support Motion Detection configuration through ONVIF; for such cameras, only areas within the Motion mask are analyzed.
5. When the requested object is detected, the analysis stops, and the server waits for the next event from the camera.
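The server-side flow above can be sketched in a few lines. This is a conceptual illustration only: the event, rule, and frame structures are made up for the example and are not the platform's actual data model.

```python
def handle_camera_event(event, rules, frames):
    """Sketch of the flow above: match the incoming event against the
    camera's Detection Rules, then scan frames until a requested object
    is found."""
    # Steps 2-3: check the event against the "Act on" requirement.
    matching = [r for r in rules if r["act_on"] == event["type"]]
    if not matching:
        return None
    wanted = {obj for r in matching for obj in r["objects"]}
    # Steps 4-5: scan frame by frame; stop at the first requested object.
    for frame in frames:
        found = wanted & set(frame["objects"])
        if found:
            return sorted(found)
    return None

rules = [{"act_on": "motion", "objects": ["person", "car"]}]
frames = [{"objects": []}, {"objects": ["dog", "person"]}]
result = handle_camera_event({"type": "motion"}, rules, frames)  # ["person"]
```

An event type with no matching rule (e.g., a line-crossing event when only motion is configured) is ignored entirely, which is why step 2 runs before any frame analysis starts.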
Multiple detection rules can be configured for a single camera.
Note: For cameras supporting motion detection area configuration via ONVIF, object detection will focus on objects overlapping with the selected detection area. Objects will only be reported if they are at least 25% within the area. This approach helps reduce unwanted object alerts. For example, if a camera covers both a yard and a highway, but only cars entering the yard are of interest, this logic will ignore cars on the highway and only trigger alerts when a car enters the yard. If the analysis were done on the entire image each time the camera's motion detector is triggered, any car on the highway would be reported, resulting in unwanted alerts.
For cameras that do not support ONVIF motion detection protocols (such as Axis and Vivotek), the analysis is performed on the whole image. In this case, any moving object will be reported, as these cameras do not provide information about active areas, making it impossible to filter out unwanted events based on the motion mask.