Composite Cartoon Eyes over Face from Front Camera with CoreImage

Companion project to:


Here's some festive silliness: CartoonEyes composites cartoon eyeballs over a face captured by an iOS device's front camera, then passes that composite through a Core Image Comic Effect filter. It makes use of Core Image's face detection class, CIDetector, to find the positions of the eyes.

Capturing Video from Front Camera

I played with applying Core Image filters to a live camera feed earlier this year (see: Applying CIFilters to a Live Camera Feed with Swift). The main difference with this project is that I wanted to use the front camera rather than the default camera. To do this, I use a guard statement to filter the devices by their position and pick the first item of that filter as my device:

    guard let frontCamera = (AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice])
        .filter({ $0.position == .Front })
        .first else
    {
        fatalError("Unable to access front camera")
    }
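For context, here's a minimal sketch of how that device might be wired into a capture session so that frames reach the sample buffer delegate, following the pattern from the earlier live-camera-feed post. This is an assumption about the plumbing rather than verbatim project source; `self` is taken to be the view controller acting as the delegate.

```swift
// Sketch (Swift 2 era API): feed frames from the front camera
// to an AVCaptureVideoDataOutputSampleBufferDelegate.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto

do
{
    let input = try AVCaptureDeviceInput(device: frontCamera)
    captureSession.addInput(input)
}
catch
{
    fatalError("Unable to create capture device input")
}

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self,
    queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))

if captureSession.canAddOutput(videoOutput)
{
    captureSession.addOutput(videoOutput)
}

captureSession.startRunning()
```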

Compositing Cartoon Eyes

Once I have the source image from the camera, it's time to composite the cartoon eyes over it. My function, eyeImage(), takes the original camera image (used to calculate the eye positions), an image to composite the eye over, and a Boolean indicating whether it's working on the left or right eye:

    func eyeImage(cameraImage: CIImage, backgroundImage: CIImage, leftEye: Bool) -> CIImage

It's called twice:

    let leftEyeImage = eyeImage(cameraImage, backgroundImage: cameraImage, leftEye: true)
    let rightEyeImage = eyeImage(cameraImage, backgroundImage: leftEyeImage, leftEye: false)

The first call creates an image compositing the left eye over the original camera image and the second call composites the right eye over the previous left eye composite.

Before I can invoke eyeImage(), I need a CIDetector instance which will give me the positions of the facial features:

    lazy var ciContext: CIContext =
    {
        [unowned self] in

        return CIContext(EAGLContext: self.eaglContext)
    }()

    lazy var detector: CIDetector =
    {
        [unowned self] in

        return CIDetector(ofType: CIDetectorTypeFace, context: self.ciContext, options: nil)
    }()


Inside eyeImage(), I first create a composite Core Image filter and a transform filter:

    let compositingFilter = CIFilter(name: "CISourceAtopCompositing")!
    let transformFilter = CIFilter(name: "CIAffineTransform")!

I also need to calculate the midpoint of the cartoon eye image as an offset for the transform:

    let halfEyeWidth = eyeballImage.extent.width / 2
    let halfEyeHeight = eyeballImage.extent.height / 2

Now, using the detector, I get the features of the first face in the image and check, depending on the leftEye argument, whether it has the eye position I need:

    if let features = detector.featuresInImage(cameraImage).first as? CIFaceFeature
        where leftEye ? features.hasLeftEyePosition : features.hasRightEyePosition

If it does, I'll need to create a transform for the transform filter to position the cartoon eye image based on the real eye position:

    let eyePosition = CGAffineTransformMakeTranslation(
        leftEye ? features.leftEyePosition.x - halfEyeWidth : features.rightEyePosition.x - halfEyeWidth,
        leftEye ? features.leftEyePosition.y - halfEyeHeight : features.rightEyePosition.y - halfEyeHeight)

...and now I can execute the transform filter and get an image of the cartoon eye positioned over the real eye:

    transformFilter.setValue(eyeballImage, forKey: "inputImage")
    transformFilter.setValue(NSValue(CGAffineTransform: eyePosition), forKey: "inputTransform")

    let transformResult = transformFilter.valueForKey("outputImage") as! CIImage

Finally I composite the transformed cartoon eye over the supplied background image and return that result:

    compositingFilter.setValue(backgroundImage, forKey: kCIInputBackgroundImageKey)
    compositingFilter.setValue(transformResult, forKey: kCIInputImageKey)

    return  compositingFilter.valueForKey("outputImage") as! CIImage

If no facial features were detected, I simply return the background image unchanged.
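Putting the pieces together, the whole function reads roughly like this. This is a sketch assembled from the fragments above rather than verbatim project source; `eyeballImage` is assumed to be a stored CIImage property holding the cartoon eye artwork.

```swift
// Sketch: composite one cartoon eye over backgroundImage, positioned at the
// matching real eye detected in cameraImage (Swift 2 era API).
func eyeImage(cameraImage: CIImage, backgroundImage: CIImage, leftEye: Bool) -> CIImage
{
    let compositingFilter = CIFilter(name: "CISourceAtopCompositing")!
    let transformFilter = CIFilter(name: "CIAffineTransform")!

    // Offset so the eye image is centred on the detected position.
    let halfEyeWidth = eyeballImage.extent.width / 2
    let halfEyeHeight = eyeballImage.extent.height / 2

    if let features = detector.featuresInImage(cameraImage).first as? CIFaceFeature
        where leftEye ? features.hasLeftEyePosition : features.hasRightEyePosition
    {
        let eyePosition = CGAffineTransformMakeTranslation(
            leftEye ? features.leftEyePosition.x - halfEyeWidth : features.rightEyePosition.x - halfEyeWidth,
            leftEye ? features.leftEyePosition.y - halfEyeHeight : features.rightEyePosition.y - halfEyeHeight)

        // Move the cartoon eye over the real eye...
        transformFilter.setValue(eyeballImage, forKey: "inputImage")
        transformFilter.setValue(NSValue(CGAffineTransform: eyePosition), forKey: "inputTransform")

        let transformResult = transformFilter.valueForKey("outputImage") as! CIImage

        // ...and composite it over the supplied background.
        compositingFilter.setValue(backgroundImage, forKey: kCIInputBackgroundImageKey)
        compositingFilter.setValue(transformResult, forKey: kCIInputImageKey)

        return compositingFilter.valueForKey("outputImage") as! CIImage
    }
    else
    {
        // No face found: pass the background through untouched.
        return backgroundImage
    }
}
```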

Displaying Output with OpenGL ES

Rather than converting the output to a UIImage and displaying it in a UIImageView, this project takes the more direct route and renders the output using a GLKView. Once the captureOutput method of my AVCaptureVideoDataOutputSampleBufferDelegate has its image, it invalidates the GLKView's display on the main thread:
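The snippet for that step is missing from this copy of the post; a minimal sketch of what it likely looks like follows. The `imageView` and `cameraImage` property names are assumptions.

```swift
// Sketch (Swift 2 era API): convert the sample buffer to a CIImage,
// then trigger a redraw of the GLKView on the main thread.
func captureOutput(captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!)
{
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else
    {
        return
    }

    cameraImage = CIImage(CVPixelBuffer: pixelBuffer)

    dispatch_async(dispatch_get_main_queue())
    {
        self.imageView.setNeedsDisplay()
    }
}
```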


Since my view controller is the GLKView's GLKViewDelegate, this invokes glkView(), where I call eyeImage(), pass the right eye composite into a comic effect Core Image filter, and draw that output to the GLKView's context:

    let leftEyeImage = eyeImage(cameraImage, backgroundImage: cameraImage, leftEye: true)
    let rightEyeImage = eyeImage(cameraImage, backgroundImage: leftEyeImage, leftEye: false)

    comicEffect.setValue(rightEyeImage, forKey: kCIInputImageKey)

    let outputImage = comicEffect.valueForKey(kCIOutputImageKey) as! CIImage

    ciContext.drawImage(outputImage,
        inRect: CGRect(x: 0, y: 0,
            width: imageView.drawableWidth,
            height: imageView.drawableHeight),
        fromRect: outputImage.extent)
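The comicEffect filter itself isn't declared anywhere in this excerpt; presumably it's created once as a property and reused each frame, along these lines:

```swift
// Assumption: comicEffect holds Core Image's built-in comic effect filter,
// created once rather than per frame.
let comicEffect = CIFilter(name: "CIComicEffect")!
```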


With very little work, combining Core Image's filters and detectors with an AV capture session has allowed me to composite different images based on the positions of facial features.

As always, the source code for this project is available at my GitHub repository. Enjoy!
