Legal & Compliance

Image enhancements with AI - what is allowed?

Companies regularly ask for flawless portrait photos - especially when only low-resolution or poorly exposed images are available. At first glance, generative AI seems like the ideal solution, as it can greatly improve the look of images and enhance them aesthetically. However, in the process the image is regenerated entirely synthetically. For identity-relevant photos, this poses a considerable security and compliance risk.

What is generative AI and how does it work?

Generative AI, best known in connection with LLMs such as ChatGPT or Mistral, is also used for image generation. Image-generating model families (e.g. diffusion models) can synthesize entirely new image content from existing inputs, for example an existing image combined with a text prompt. Unlike traditional image processing, which corrects colors or removes backgrounds, generative AI creates an image that no longer derives pixel by pixel from the original - it only looks as if it does. The model reconstructs faces, interprets missing details or parts of the face and fills in information based on statistical probabilities.

The poorer the original image quality, the more the AI has to "invent". This increases the risk that the result no longer corresponds exactly to the real face. We regularly address this problem at Photo Collect when we advise companies on the use of AI in ID processes.
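
To make the difference to classic image editing concrete, here is a minimal, purely illustrative sketch of such an image-to-image generation in Python, using the open-source diffusers library. The checkpoint name, prompt and parameter values are assumptions for illustration and have nothing to do with Photo Collect's own processing; the point is that the output is synthesized anew rather than edited pixel by pixel.

```python
# Illustrative only: image-to-image generation with a diffusion model.
# Checkpoint, prompt and parameters are assumptions, not a recommended workflow.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

original = Image.open("low_quality_portrait.jpg").convert("RGB").resize((512, 512))

# "strength" controls how much of the original is replaced by synthetic content:
# the higher the value, the more the model has to "invent".
result = pipe(
    prompt="professional studio portrait, sharp, evenly lit",
    image=original,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

# The result merely looks like the original person - every pixel is generated.
result.save("enhanced_portrait.png")
```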

Advantages of generative AI and the disadvantages for ID card images

Generative AI has many advantages in the creative field:

  • It can generate aesthetically perfect images - good lighting, smooth skin, flawless teeth, a clear gaze.
  • It can automatically remove distracting elements, e.g. reflections, glasses or jewelry.
  • It can replace existing clothing, e.g. swap a T-shirt for a shirt or uniform.

Disadvantages for ID card images:

  • The generated version no longer shows the real biometric face.
  • Small changes in eye distance, jaw shape, pupils or head posture can be enough to impair biometric systems.
  • There is a risk that the AI "optimizes" a person's face so that it ends up resembling someone else more than the original.

The bottom line: since ID card images are an identity feature, any artificial reconstruction is a significant security and compliance risk. To obtain suitable images, it is better to use analytical AI checks that, for example, prevent a photo from being captured with an incorrect head position.

Can generative AI images be used for ID cards?

For official ID cards (ID, passport, driver's license), the answer is clear: No.
Generative AI is not permitted, as it can falsify biometric identifiability. Most international standards (ICAO, ISO) require an unaltered, photographic image of the real person.

The use of generatively enhanced images within companies is also strongly discouraged from a legal and security point of view:

  1. Risk of manipulation: There is a high risk that generated images do not correspond exactly to the real person.
  2. Security requirements: In many companies, badges are used as a means of access. An AI-altered identity weakens access security.
  3. Data protection & compliance: Artificially altering faces can be considered non-compliant biometric processing.

For this reason, Photo Collect works exclusively with non-generative processes:

  • Automatic quality and biometric checks against the specified standards (e.g. eye distance in pixels, head alignment, smile) - see the illustrative sketch below.
  • Background removal, cropping, rotation and head centering for image enhancement without changing the face itself.

The faces always remain true to the original. This is crucial in order to reliably meet the requirements of companies, authorities or security-critical organizations.
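
As an illustration of what such an analytical, non-generative check can look like, here is a minimal Python sketch based on the open-source face_recognition library. The library choice, file names and thresholds are assumptions made for this example and do not describe Photo Collect's actual implementation; the point is that the photo is measured, not altered.

```python
# Illustrative only: measure the face instead of changing it.
# Library choice and thresholds are assumptions for this example.
import math
import face_recognition

image = face_recognition.load_image_file("portrait.jpg")
faces = face_recognition.face_landmarks(image)

if len(faces) != 1:
    raise ValueError("Expected exactly one face in the photo")

landmarks = faces[0]

def center(points):
    """Geometric center of a list of (x, y) landmark points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

left_eye = center(landmarks["left_eye"])
right_eye = center(landmarks["right_eye"])

# Eye distance in pixels: too small means the face is too far away
# or the image resolution is too low.
eye_distance = math.dist(left_eye, right_eye)

# Head tilt: angle of the line between the eye centers relative to the horizontal.
tilt_deg = math.degrees(math.atan2(right_eye[1] - left_eye[1],
                                   right_eye[0] - left_eye[0]))

# Example acceptance rule (thresholds are illustrative, not an official standard).
if eye_distance < 90 or abs(tilt_deg) > 5:
    print("Reject: retake the photo with the head straight and closer to the camera.")
else:
    print(f"Accept: eye distance {eye_distance:.0f} px, head tilt {tilt_deg:.1f} degrees.")
```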

Are generatively enhanced AI images deepfakes?

The term deepfake is usually used for improperly manipulated or falsified videos and images. Technically speaking, however, a generatively created portrait photo falls into the same category:

  • It is not authentic, but synthetic.
  • It can credibly depict a person who does not exist.
  • It undermines the link between the face and the real identity.

In the context of HR, access control and corporate security, such an image is therefore a functional deepfake, regardless of whether the intention was malicious or not.

A generatively modified employee ID photo could unintentionally deceive real checks and, in the worst case, make it easier for an unauthorized person to gain access.

Why Photo Collect deliberately does not use generative AI

Photo Collect uses AI exclusively for quality checks, biometric analysis and technical image processing - not for modifying faces. This ensures that:

  • the identity of a person is correctly depicted
  • all requirements for ID card images are met
  • companies meet their compliance and security requirements

HR departments and corporate security teams in particular benefit from the fact that the entire process remains scalable, secure and legally compliant, without the risk of synthetic identity changes. In addition, the data remains in Switzerland at all times and is processed only for the contractual purpose. We address further legal questions relating to employee photos in this article.

Left: poor image quality. Center: version improved by AI. Right: original image.

Banal snapshots become "professional" photos: on the right, the GenAI version generated with FLUX.2 Pro. What looks impressive is legally tricky.
