Processing images means applying filters—an image filter is a piece of software that examines an input image pixel by pixel and algorithmically applies some effect in order to create an output image. In Core Image, image processing relies on the CIFilter and CIImage classes, which describe filters and their input and output. To apply filters and display or export results, you can make use of the integration between Core Image and other system frameworks, or create your own rendering workflow with the CIContext class. This chapter covers the key concepts for working with these classes to apply filters and render results.
Overview
There are many ways to use Core Image for image processing in your app. Listing 1-1 shows a basic example and provides pointers to further explanations in this chapter.
Listing 1-1 The basics of applying a filter to an image
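The original listing isn't reproduced here, so the following is a minimal Objective-C sketch of the five steps described below. The CISepiaTone filter and the imageURL parameter are assumptions for illustration.

```objc
#import <CoreImage/CoreImage.h>

// A minimal sketch of the steps described below; "imageURL" is a placeholder
// for wherever your input image lives.
static CGImageRef CreateSepiaImage(NSURL *imageURL) {
    // 1. Create a Core Image context with default options.
    CIContext *context = [CIContext contextWithOptions:nil];

    // 2. Instantiate a filter and set its parameters.
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:@0.8 forKey:kCIInputIntensityKey];

    // 3. Create a CIImage from the file URL and make it the filter's input.
    CIImage *image = [CIImage imageWithContentsOfURL:imageURL];
    [filter setValue:image forKey:kCIInputImageKey];

    // 4. The output image is only a "recipe" at this point; nothing has rendered yet.
    CIImage *result = filter.outputImage;

    // 5. Rendering happens here; the caller owns the returned CGImage.
    return [context createCGImage:result fromRect:result.extent];
}
```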
Here’s what the code does:
1. Create a CIContext object (with default options). You don’t always need your own Core Image context—often you can integrate with other system frameworks that manage rendering for you. Creating your own context lets you more precisely control the rendering process and the resources involved in rendering. Contexts are heavyweight objects, so if you do create one, do so as early as possible, and reuse it each time you need to process images. (See Building Your Own Workflow with a Core Image Context.)
2. Instantiate a CIFilter object representing the filter to apply, and provide values for its parameters. (See Filters Describe Image Processing Effects.)
3. Create a CIImage object representing the image to be processed, and provide it as the input image parameter to the filter. Reading image data from a URL is just one of many ways to create an image object. (See Images are the Input and Output of Filters.)
4. Get a CIImage object representing the filter’s output. The filter has not yet executed at this point—the image object is a “recipe” specifying how to create an image with the specified filter, parameters, and input. Core Image performs this recipe only when you request rendering. (See Images are the Input and Output of Filters.)
5. Render the output image to a Core Graphics image that you can display or save to a file. (See Building Your Own Workflow with a Core Image Context.)
Images are the Input and Output of Filters
Core Image filters process and produce Core Image images. A CIImage instance is an immutable object representing an image. These objects don’t directly represent image bitmap data—instead, a CIImage object is a “recipe” for producing an image. One recipe might call for loading an image from a file; another might represent output from a filter, or from a chain of filters. Core Image performs these recipes only when you request that an image be rendered for display or output.

To apply a filter, create one or more CIImage objects representing the images to be processed by the filter, and assign them to the input parameters of the filter (such as kCIInputImageKey). You can create a Core Image image object from nearly any source of image data, including:
- URLs referencing image files to be loaded, or NSData objects containing image file data
- Quartz 2D, UIKit, or AppKit image representations (CGImageRef, UIImage, or NSBitmapImageRep objects)
- Metal, OpenGL, or OpenGL ES textures
- Core Video image or pixel buffers (CVImageBufferRef or CVPixelBufferRef)
- IOSurfaceRef objects that share image data between processes
- Image bitmap data in memory (a pointer to such data, or a CIImageProvider object that provides data on demand)
For a full list of ways to create a CIImage object, see CIImage Class Reference.
Because a CIImage object describes how to produce an image (instead of containing image data), it can also represent filter output. When you access the outputImage property of a CIFilter object, Core Image merely identifies and stores the steps needed to execute the filter. Those steps are performed only when you request that the image be rendered for display or output. You can request rendering either explicitly, using one of the CIContext render or draw methods (see Building Your Own Workflow with a Core Image Context), or implicitly, by displaying an image using one of the many system frameworks that work with Core Image (see Integrating with Other Frameworks).
Deferring processing until rendering time makes Core Image fast and efficient. At rendering time, Core Image can see if more than one filter needs to be applied to an image. If so, it automatically concatenates multiple “recipes” and organizes them to eliminate redundant operations, so that each pixel is processed only once rather than many times.
Filters Describe Image Processing Effects
An instance of the CIFilter class is a mutable object representing an image processing effect and any parameters that control that effect’s behavior. To use a filter, you create a CIFilter object, set its input parameters, and then access its output image (see Images are the Input and Output of Filters below). Call the filterWithName: initializer to instantiate a filter object using the name of a filter known to the system (see Querying the System for Filters or Core Image Filter Reference).
Most filters have one or more input parameters that let you control how processing is done. Each input parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can optionally have other attributes, such as its default value, the allowable minimum and maximum values, the display name for the parameter, and other attributes described in CIFilter Class Reference. For example, the CIColorMonochrome filter has three input parameters—the image to process, a monochrome color, and the color intensity.
Filter parameters are defined as key-value pairs; to work with parameters, you typically use the valueForKey: and setValue:forKey: methods or other features that build upon key-value coding (such as Core Animation). The key is a constant that identifies the attribute, and the value is the setting associated with the key. Core Image attribute values typically use one of the data types listed in the following table (Attribute value data types).
Data Type | Object | Description |
---|---|---|
Strings | NSString | Text, typically for display to the user |
Floating-point values | NSNumber | A scalar value, such as an intensity level or radius |
Vectors | CIVector | A set of floating-point values that can specify positions, sizes, rectangles, or untagged color component values |
Colors | CIColor | A set of color component values, tagged with a color space specifying how to interpret them |
Images | CIImage | An image; see Images are the Input and Output of Filters |
Transforms | NSAffineTransform (macOS) or NSValue containing a CGAffineTransform (iOS/tvOS) | A coordinate transformation to apply to an image |
Important: CIFilter objects are mutable, so you cannot safely share them between different threads. Each thread must create its own CIFilter objects. However, a filter’s input and output CIImage objects are immutable, and thus safe to pass between threads.
Chaining Filters for Complex Effects
Every Core Image filter produces an output CIImage object, so you can use this object as input to another filter. For example, the sequence of filters illustrated in Figure 1-1 applies a color effect to an image, then adds a glow effect, and finally crops a section out of the result.
Core Image optimizes the application of filter chains such as this one to render results quickly and efficiently. Each CIImage object in the chain isn’t a fully rendered image, but instead merely a “recipe” for rendering. Core Image doesn’t need to execute each filter individually, wasting time and memory rendering intermediate pixel buffers that will never be seen. Instead, Core Image combines filters into a single operation, and can even reorganize filters when applying them in a different order will produce the same result more efficiently. Figure 1-2 shows a more accurate rendition of the example filter chain from Figure 1-1.

Notice that in Figure 1-2, the crop operation has moved from last to first. That filter results in large areas of the original image being cropped out of the final output. As such, there’s no need to apply the color and glow filters to those pixels. By performing the crop first, Core Image ensures that expensive image processing operations apply only to pixels that will be visible in the final output.
Listing 1-2 shows how to set up a filter chain like that illustrated above.
Listing 1-2 Creating a filter chain
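The original listing isn't shown here; a sketch of such a chain might look like the following. The filter names and parameter values are illustrative, but the variable names (colorFilter, bloomImage, croppedImage) match the ones the discussion below refers to.

```objc
#import <CoreImage/CoreImage.h>

// A sketch of a three-step chain: color effect, glow, crop.
static CIImage *FilteredImage(CIImage *image) {
    // Color effect: create a filter and set its input image in one call.
    CIFilter *colorFilter = [CIFilter filterWithName:@"CIPhotoEffectInstant"
                                 withInputParameters:@{ kCIInputImageKey: image }];

    // Glow effect: apply a filter directly to the previous output image.
    CIImage *bloomImage = [colorFilter.outputImage
        imageByApplyingFilter:@"CIBloom"
          withInputParameters:@{ kCIInputRadiusKey: @10.0,
                                 kCIInputIntensityKey: @1.0 }];

    // Crop: use a CIImage convenience method instead of a CIFilter object.
    CGRect cropRect = CGRectMake(350, 350, 150, 150);
    CIImage *croppedImage = [bloomImage imageByCroppingToRect:cropRect];

    return croppedImage;
}
```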
Listing 1-2 also shows a few different convenience methods for configuring filters and accessing their results. In summary, you can use any of these methods to apply a filter, either individually or as part of a filter chain:
- Create a CIFilter instance with the filterWithName: initializer, set parameters using the setValue:forKey: method (including the kCIInputImageKey for the image to process), and access the output image with the outputImage property. (See Listing 1-1.)
- Create a CIFilter instance and set its parameters (including the input image) in one call with the filterWithName:withInputParameters: initializer, then use the outputImage property to access output. (See the colorFilter example in Listing 1-2.)
- Apply a filter without creating a CIFilter instance by using the imageByApplyingFilter:withInputParameters: method of a CIImage object. (See the bloomImage example in Listing 1-2.)
- For certain commonly used filter operations, such as cropping, clamping, and applying coordinate transforms, use other CIImage instance methods listed in Creating an Image by Modifying an Existing Image. (See the croppedImage example in Listing 1-2.)
Using Special Filter Types for More Options
Most of the built-in Core Image filters operate on a main input image (possibly with additional input images that affect processing) and create a single output image. But there are several additional types of filters that you can use to create interesting effects or combine with other filters to produce more complex workflows.
Compositing (or blending) filters combine two images according to a preset formula. For example:
The CISourceInCompositing filter combines images such that only the areas that are opaque in both input images are visible in the output image.
The CIMultiplyBlendMode filter multiplies pixel colors from both images, producing a darkened output image.
For the complete list of compositing filters, query the CICategoryCompositeOperation category.
Note: You can arrange input images before compositing them by applying geometry adjustments to each. See the CICategoryGeometryAdjustment filter category or the imageByApplyingTransform: method.

Generator filters take no input images. Instead, these filters use other input parameters to create a new image from scratch. Some generators produce output that can be useful on its own, and others can be combined in filter chains to produce more interesting images. Some examples from among the built-in Core Image filters include:
Filters like CIQRCodeGenerator and CICode128BarcodeGenerator generate barcode images that encode specified input data.
Filters like CIConstantColorGenerator, CICheckerboardGenerator, and CILinearGradient generate simple procedural images from specified colors. You can combine these with other filters for interesting effects—for example, the CIRadialGradient filter can create a mask for use with the CIMaskedVariableBlur filter.
Filters like CILenticularHaloGenerator and CISunbeamsGenerator create standalone visual effects—combine these with compositing filters to add special effects to an image.
To find generator filters, query the CICategoryGenerator and CICategoryGradient categories.
A reduction filter operates on an input image, but instead of creating an output image in the traditional sense, its output describes information about the input image. For example:
The CIAreaMaximum filter outputs a single color value representing the brightest of all pixel colors in a specified area of an image.
The CIAreaHistogram filter outputs information about the numbers of pixels for each intensity value in a specified area of an image.
All Core Image filters must produce a CIImage object as their output, so the information produced by a reduction filter is still an image. However, you usually don’t display these images—instead, you read color values from single-pixel or single-row images, or use them as input to other filters.

For the complete list of reduction filters, query the CICategoryReduction category.
A transition filter takes two input images and varies its output between them in response to an independent variable—typically, this variable is time, so you can use a transition filter to create an animation that starts with one image, ends on another, and progresses from one to the other using an interesting visual effect. Core Image provides several built-in transition filters, including:
The CIDissolveTransition filter produces a simple cross-dissolve, fading from one image to another.
The CICopyMachineTransition filter simulates a photocopy machine, swiping a bar of bright light across one image to reveal another.
For the complete list of transition filters, query the CICategoryTransition category.
Integrating with Other Frameworks
Core Image interoperates with several other technologies in iOS, macOS, and tvOS. Thanks to this tight integration, you can use Core Image to easily add visual effects to games, video, or images in your app’s user interface without needing to build complex rendering code. The following sections cover several of the common ways to use Core Image in an app and the conveniences system frameworks provide for each.
Processing Still Images in UIKit and AppKit
UIKit and AppKit provide easy ways to add Core Image processing to still images, whether those images appear in your app’s UI or are part of its workflow. For example:
A travel app might present stock photography of destinations in a list, then apply filters to those images to create a subtle background for each destination’s detail page.
A social app might apply filters to user avatar pictures to indicate mood for each post.
A photography app might allow the user to customize images with filters upon capture, or offer a Photos app extension for adding effects to pictures in the user’s Photos library (see Photo Editing in App Extension Programming Guide).
Note: Don’t use Core Image to create blur effects that are part of a user interface design (like those seen in the translucent sidebars, toolbars, and backgrounds of the macOS, iOS, and tvOS system interfaces). Instead, see the NSVisualEffectView (macOS) or UIVisualEffectView (iOS/tvOS) classes, which automatically match the system appearance and provide efficient real-time rendering.
In iOS and tvOS, you can apply Core Image filters anywhere you work with UIImage objects. Listing 1-3 shows a simple method for using filters with an image view.
Listing 1-3 Applying a filter to an image view (iOS/tvOS)
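As a stand-in for the original listing, here is a hedged sketch of such a method; the imageView parameter and the CISepiaTone filter are assumptions.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// A sketch: filter the image an image view is already showing and display the result.
- (void)applySepiaToImageView:(UIImageView *)imageView {
    UIImage *uiImage = imageView.image;
    if (uiImage == nil) { return; }

    // Wrap the UIImage's bitmap in a CIImage and build the filter.
    CIImage *inputImage = [CIImage imageWithCGImage:uiImage.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                            withInputParameters:@{ kCIInputImageKey: inputImage,
                                                   kCIInputIntensityKey: @0.8 }];

    // Render through a Core Image context (in a real app, create this once and reuse it).
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:filter.outputImage
                                       fromRect:inputImage.extent];
    imageView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}
```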
In macOS, use the initWithBitmapImageRep: method to create CIImage objects from bitmap images, and the NSCIImageRep class to create images you can use anywhere NSImage objects are supported.
Processing Video with AV Foundation
The AVFoundation framework provides a number of high-level utilities for working with video and audio content. Among these is the AVVideoComposition class, which you can use to combine or edit video and audio tracks into a single presentation. (For general information on compositions, see Editing in AVFoundation Programming Guide.) You can use an AVVideoComposition object to apply Core Image filters to each frame of a video during playback or export, as shown in Listing 1-4.
Listing 1-4 Applying a filter to a video composition
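In place of the original listing, here is a sketch of one way to build such a composition, assuming an already-loaded AVAsset and an arbitrary CIGaussianBlur radius.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// A sketch: blur every frame of an asset's video track.
static AVVideoComposition *BlurredComposition(AVAsset *asset) {
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];

    return [AVVideoComposition videoCompositionWithAsset:asset
             applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
        // Clamp the frame's edges so the blur doesn't soften the borders,
        // then blur and crop back to the original extent (see the Tip below).
        CIImage *source = [request.sourceImage imageByClampingToExtent];
        [filter setValue:source forKey:kCIInputImageKey];
        [filter setValue:@10.0 forKey:kCIInputRadiusKey];

        CIImage *output = [filter.outputImage
            imageByCroppingToRect:request.sourceImage.extent];

        // Hand the filtered frame back to AVFoundation.
        [request finishWithImage:output context:nil];
    }];
}
```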
When you create a composition with the videoCompositionWithAsset:applyingCIFiltersWithHandler: initializer, you supply a handler that’s responsible for applying filters to each frame of video. AVFoundation automatically calls your handler during playback or export. In the handler, you use the provided AVAsynchronousCIImageFilteringRequest object first to retrieve the video frame to be filtered (and supplementary information such as the frame time), then to provide the filtered image for use by the composition.
To use the created video composition for playback, create an AVPlayerItem object from the same asset used as the composition’s source, then assign the composition to the player item’s videoComposition property. To export the composition to a new movie file, create an AVAssetExportSession object from the same source asset, then assign the composition to the export session’s videoComposition property.
Tip: Listing 1-4 also shows another useful Core Image technique. By default, a blur filter also softens the edges of an image by blurring image pixels together with the transparent pixels that (in the filter’s image processing space) surround the image. This effect can be undesirable in some circumstances, such as when filtering video.

To avoid this effect, use the imageByClampingToExtent method (or the CIAffineClamp filter) to extend the edge pixels of the image infinitely in all directions before blurring. Clamping creates an image of infinite size, so you should also crop the image after blurring.
Processing Game Content with SpriteKit and SceneKit
SpriteKit is a technology for building 2D games and other types of apps that feature highly dynamic animated content; SceneKit is for working with 3D assets, rendering and animating 3D scenes, and building 3D games. (For more information on each technology, see SpriteKit Programming Guide and SceneKit Framework Reference.) Both frameworks provide high-performance real-time rendering, with easy ways to add Core Image processing to all or part of a scene.
In SpriteKit, you can add Core Image filters using the SKEffectNode class. To see an example of this class in use, create a new Xcode project using the Game template (for iOS or tvOS), select SpriteKit as the game technology, and modify the touchesBegan:withEvent: method in the GameScene class to use the code in Listing 1-5. (For the macOS Game template, you can make similar modifications to the mouseDown: method.)
Listing 1-5 Applying filters in SpriteKit
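The original listing isn't included here; a sketch along those lines might look like this, assuming the template's Spaceship sprite asset and an arbitrary CIPixellate filter.

```objc
#import <SpriteKit/SpriteKit.h>
#import <CoreImage/CoreImage.h>

// A sketch for the iOS/tvOS Game template's GameScene class: on each touch,
// add a sprite wrapped in an effect node so a Core Image filter applies to it.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint location = [[touches anyObject] locationInNode:self];

    SKEffectNode *effectNode = [[SKEffectNode alloc] init];
    effectNode.position = location;
    effectNode.shouldEnableEffects = YES;
    // SpriteKit supplies the filter's input image; only the other parameters are set here.
    effectNode.filter = [CIFilter filterWithName:@"CIPixellate"
                             withInputParameters:@{ kCIInputScaleKey: @20.0 }];

    SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
    [effectNode addChild:sprite];
    [self addChild:effectNode];
}
```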
Note that the SKScene class itself is a subclass of SKEffectNode, so you can also apply a Core Image filter to an entire SpriteKit scene.
In SceneKit, the filters property of the SCNNode class can apply Core Image filters to any element of a 3D scene. To see this property in action, create a new Xcode project using the Game template (for iOS, tvOS, or macOS), select SceneKit as the game technology, and modify the viewDidLoad method in the GameViewController class to use the code in Listing 1-6.
Listing 1-6 Applying filters in SceneKit
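A sketch of what that modification might look like follows, assuming the template's scene contains a node named "ship" and using an arbitrary CIBloom filter.

```objc
#import <SceneKit/SceneKit.h>
#import <CoreImage/CoreImage.h>

// A sketch for the Game template's GameViewController: after the template's
// existing scene setup, fetch the ship node and attach a filter to it.
- (void)viewDidLoad {
    [super viewDidLoad];
    // ... the template's scene and view setup goes here ...

    SCNView *scnView = (SCNView *)self.view;
    SCNNode *ship = [scnView.scene.rootNode childNodeWithName:@"ship"
                                                  recursively:YES];

    // An array of CIFilter objects applies to the node's rendered output.
    CIFilter *bloom = [CIFilter filterWithName:@"CIBloom"
                           withInputParameters:@{ kCIInputRadiusKey: @10.0,
                                                  kCIInputIntensityKey: @1.0 }];
    ship.filters = @[ bloom ];
}
```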
You can also animate filter parameters on a SceneKit node—for details, see the reference documentation for the filters property.
In both SpriteKit and SceneKit, you can use transitions to change a view’s scene with added visual flair. (See the presentScene:transition: method for SpriteKit and the presentScene:withTransition:incomingPointOfView:completionHandler: method for SceneKit.) Use the SKTransition class and its transitionWithCIFilter:duration: initializer to create a transition animation from any Core Image transition filter.
Processing Core Animation Layers (macOS)
In macOS, you can use the filters property to apply filters to the contents of any CALayer-backed view, and add animations that vary filter parameters over time. See Filters Add Visual Effects to OS X Views and Advanced Animation Tricks in Core Animation Programming Guide.
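For illustration, a minimal macOS sketch might look like the following; the filteredView variable is a placeholder for whichever layer-backed view you want to filter.

```objc
#import <Cocoa/Cocoa.h>
#import <CoreImage/CoreImage.h>

// In some controller's setup code; "filteredView" is an assumed NSView reference.
filteredView.wantsLayer = YES;
filteredView.layerUsesCoreImageFilters = YES;   // required before the layer will run CI filters

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setDefaults];
blur.name = @"blur";                            // enables animation key paths like @"filters.blur.inputRadius"
filteredView.layer.filters = @[ blur ];
```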
Building Your Own Workflow with a Core Image Context
When you apply Core Image filters using the technologies listed in the previous section, those frameworks automatically manage the underlying resources that Core Image uses to process images and render results for display. This approach both maximizes performance for those workflows and makes them easier to set up. However, in some cases it’s more prudent to manage those resources yourself using the CIContext class. By managing a Core Image context directly, you can precisely control your app’s performance characteristics or integrate Core Image with lower-level rendering technologies.
A Core Image context represents the CPU or GPU computing technology, resources, and settings needed to execute filters and produce images. Several kinds of contexts are available, so you should choose the option that best fits your app’s workflow and the other technologies you may be working with. The sections below discuss some common scenarios; for the full set of options, see CIContext Class Reference.
Important: A Core Image context is a heavyweight object managing a large amount of resources and state. Repeatedly creating and destroying contexts has a large performance cost, so if you plan to perform multiple image processing operations, create a context early on and store it for future reuse.
Rendering with an Automatic Context
If you don’t have constraints on how your app interoperates with other graphics technologies, creating a Core Image context is simple: just use the basic init or initWithOptions: initializer. When you do so, Core Image automatically manages resources internally, choosing the appropriate or best available CPU or GPU rendering technology based on the current device and any options you specify. This approach is well-suited to tasks such as rendering a processed image for output to a file (for example, with the writeJPEGRepresentationOfImage:toURL:colorSpace:options:error: method).
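As an illustration of this approach, here is a sketch that filters an image from disk and writes a JPEG; the file URLs and the CIPhotoEffectNoir filter are placeholders.

```objc
#import <CoreImage/CoreImage.h>

// A sketch of "automatic" context usage: filter an image and write the result as JPEG.
static BOOL WriteFilteredJPEG(NSURL *inputURL, NSURL *outputURL, NSError **error) {
    // Let Core Image pick the best available renderer.
    CIContext *context = [[CIContext alloc] initWithOptions:nil];

    CIImage *image = [CIImage imageWithContentsOfURL:inputURL];
    CIImage *filtered = [image imageByApplyingFilter:@"CIPhotoEffectNoir"
                                 withInputParameters:@{}];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    BOOL ok = [context writeJPEGRepresentationOfImage:filtered
                                                toURL:outputURL
                                           colorSpace:colorSpace
                                              options:@{}
                                                error:error];
    CGColorSpaceRelease(colorSpace);
    return ok;
}
```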
Note: A context without an explicitly specified rendering destination cannot use the drawImage:inRect:fromRect: method, because that method’s behavior changes depending on the rendering destination in use. Instead, use the CIContext methods whose names begin with render or create to specify an explicit destination.
Take care when using this approach if you intend to render Core Image results in real time—that is, to animate changes in filter parameters, to produce an animated transition effect, or to process video or other visual content that already renders many times per second. Even though a CIContext object created with this approach can automatically render using the GPU, presenting the rendered results may involve expensive copy operations between CPU and GPU memory.
Real-Time Rendering with Metal
The Metal framework provides low-overhead access to the GPU, enabling high performance for graphics rendering and parallel compute workflows. Such workflows are integral to image processing, so Core Image builds upon Metal wherever possible. If you’re building an app that renders graphics with Metal, or if you want to leverage Metal to get real-time performance for animating filter output or filtering animated input (such as live video), use a Metal device to create your Core Image context.
Listing 1-7 and Listing 1-8 show an example of using a MetalKit view (MTKView) to render Core Image output. (Important steps are numbered in each listing and described afterward.)
Listing 1-7 Setting up a Metal view for Core Image rendering
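The original listing isn't reproduced here; the sketch below shows roughly the setup that the numbered notes describe. The class and property names (ViewController, metalView, sourceTexture, and so on) are illustrative.

```objc
#import <UIKit/UIKit.h>
#import <Metal/Metal.h>
#import <MetalKit/MetalKit.h>
#import <CoreImage/CoreImage.h>

@interface ViewController : UIViewController <MTKViewDelegate>
@property (nonatomic, strong) MTKView *metalView;
@property (nonatomic, strong) id<MTLTexture> sourceTexture;   // filled elsewhere; see note 2
@property (nonatomic, strong) id<MTLDevice> device;
@property (nonatomic, strong) id<MTLCommandQueue> commandQueue;
@property (nonatomic, strong) CIContext *ciContext;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // 3. Metal objects for rendering: the GPU device and a command queue.
    self.device = MTLCreateSystemDefaultDevice();
    self.commandQueue = [self.device newCommandQueue];

    // 4. Configure a MetalKit view backed by that device.
    self.metalView = [[MTKView alloc] initWithFrame:self.view.bounds device:self.device];
    self.metalView.framebufferOnly = NO;   // required when Core Image renders into the view
    self.metalView.delegate = self;
    [self.view addSubview:self.metalView];

    // 5. A Core Image context sharing the view's Metal device (created once, reused).
    self.ciContext = [CIContext contextWithMTLDevice:self.device];
}

// Required by MTKViewDelegate; drawInMTKView: appears in the Listing 1-8 sketch.
- (void)mtkView:(MTKView *)view drawableSizeDidChange:(CGSize)size {}

@end
```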
1. This example uses a UIViewController subclass for iOS or tvOS. To use with macOS, subclass NSViewController instead.
2. The sourceTexture property holds a Metal texture containing the image to be processed by the filter. This example doesn’t show loading the texture’s content because there are many ways to fill a texture—for example, you could load an image file using the MTKTextureLoader class, or use the texture as output from an earlier rendering pass of your own.
3. Create the Metal objects needed for rendering—a MTLDevice object representing the GPU to use, and a command queue to execute render and compute commands on that GPU. (This command queue can handle both the render or compute commands encoded by Core Image and those from any additional rendering passes of your own.)
4. Configure the MetalKit view. Important: Always set the framebufferOnly property to NO when using a Metal view, layer, or texture as a Core Image rendering destination.
5. Create a Core Image context that uses the same Metal device as the view. By sharing Metal resources, Core Image can process texture contents and render to the view without the performance costs of copying image data to and from separate CPU or GPU memory buffers. CIContext objects are expensive to create, so you do so only once and reuse it each time you process images.
MetalKit calls the drawInMTKView: method each time the view needs displaying. (By default, MetalKit may call this method as many as 60 times per second. For details, see the view’s preferredFramesPerSecond property.) Listing 1-8 shows a basic implementation of that method for rendering from a Core Image context.
Listing 1-8 Drawing with Core Image filters in a Metal view
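Again, in place of the original listing, here is a sketch of such a draw method. It continues the illustrative ViewController from the Listing 1-7 sketch (and belongs in the same @implementation), using an arbitrary CISepiaTone filter.

```objc
- (void)drawInMTKView:(MTKView *)view {
    // 1. A drawable texture to render into and a command buffer for the work.
    id<CAMetalDrawable> drawable = view.currentDrawable;
    if (drawable == nil || self.sourceTexture == nil) { return; }
    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

    // 2. Build the filter input from the Metal texture. (Parameters are constant
    //    here, but this runs every frame, so they could be varied over time.)
    CIImage *inputImage = [CIImage imageWithMTLTexture:self.sourceTexture options:nil];
    CIImage *outputImage = [inputImage imageByApplyingFilter:@"CISepiaTone"
                                         withInputParameters:@{ kCIInputIntensityKey: @0.8 }];

    // 3. Render the filter output into the drawable's texture; bounds uses the
    //    input image's dimensions.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    [self.ciContext render:outputImage
              toMTLTexture:drawable.texture
             commandBuffer:commandBuffer
                    bounds:inputImage.extent
                colorSpace:colorSpace];
    CGColorSpaceRelease(colorSpace);

    // 4. Show the result once the command buffer finishes executing.
    [commandBuffer presentDrawable:drawable];
    [commandBuffer commit];
}
```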
1. Obtain a Metal drawable texture to render into and a command buffer to encode rendering commands.
2. Configure the filter’s input parameters, including the input image sourced from a Metal texture. This example uses constant parameters, but remember that this method runs up to 60 times per second—you can use this opportunity to vary filter parameters over time to create smooth animations.
3. Tell the Core Image context to render the filter output into the view’s drawable texture. The bounds parameter tells Core Image what portion of the image to draw—this example uses the input image’s dimensions.
4. Tell Metal to display the rendered image when the command buffer finishes executing.
This example shows only the minimal code needed to render with Core Image using Metal. In a real application, you’d likely perform additional rendering passes before or after the one managed by Core Image, or render Core Image output into a secondary texture and use that texture in another rendering pass. For more information on drawing with Metal, see Metal Programming Guide.
Real-Time Rendering with OpenGL or OpenGL ES
Core Image can also use OpenGL (macOS) or OpenGL ES (iOS and tvOS) for high-performance, GPU-based rendering. Use this option if you need to support older hardware where Metal is not available, or if you want to integrate Core Image into an existing OpenGL or OpenGL ES workflow.
- If you draw using OpenGL ES (in iOS or tvOS), use the contextWithEAGLContext:options: initializer to create a Core Image context from the EAGLContext you use for rendering.
- If you draw using OpenGL (in macOS), use the contextWithCGLContext:pixelFormat:colorSpace:options: initializer to create a Core Image context from the OpenGL context you use for rendering. (See the reference documentation for that method for important details on pixel formats.)
In either scenario, use the imageWithTexture:size:flipped:colorSpace: initializer to create CIImage objects from OpenGL or OpenGL ES textures. Working with image data that’s already in GPU memory improves performance by removing redundant copy operations.
To render Core Image output in OpenGL or OpenGL ES, make your GL context current and set a destination framebuffer, then call the drawImage:inRect:fromRect: method.
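A minimal sketch of the iOS/tvOS case follows, assuming you already have an EAGL-based rendering setup (for example, a GLKView); the function names here are illustrative.

```objc
#import <OpenGLES/EAGL.h>
#import <CoreImage/CoreImage.h>

// Create the Core Image context once, from the same EAGL context your app
// already renders with.
static CIContext *GLBackedCIContext(EAGLContext *eaglContext) {
    return [CIContext contextWithEAGLContext:eaglContext options:nil];
}

// Call with the GL context current and your destination framebuffer bound
// (for example, from a GLKView's drawing method). destRect is in pixels.
static void DrawFilteredImage(CIContext *ciContext, CIImage *image, CGRect destRect) {
    [ciContext drawImage:image inRect:destRect fromRect:image.extent];
}
```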
CPU-Based Rendering with Quartz 2D
If your app doesn’t require real-time performance and draws view content using Core Graphics (for example, in the drawRect: method of a UIKit or AppKit view), use the contextWithCGContext:options: initializer to create a Core Image context that works directly with the Core Graphics context you’re already using for other drawing. (In macOS, use the CIContext property of the current NSGraphicsContext object instead.) For information on Core Graphics contexts, see Quartz 2D Programming Guide.
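A sketch for a UIKit view might look like this; the filteredImage property is a placeholder for whatever CIImage your view displays.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// In a UIView subclass; "filteredImage" is a hypothetical property holding the
// filter output the view should display.
- (void)drawRect:(CGRect)rect {
    // Reuse the Core Graphics context UIKit already prepared for this draw pass.
    // (A real app would create the CIContext once, not on every draw.)
    CGContextRef cgContext = UIGraphicsGetCurrentContext();
    CIContext *ciContext = [CIContext contextWithCGContext:cgContext options:nil];

    [ciContext drawImage:self.filteredImage
                  inRect:self.bounds
                fromRect:self.filteredImage.extent];
}
```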
Post-processing astrophotography images is mandatory: you cannot avoid it. It can be a fairly long and technical process, but it is necessary to squeeze out the most you can from your images.
Everything begins with pre-processing your images, a step including image calibration and image stacking, which we have already covered in this article.
After that, it is time to post-process your stacked image with your software of choice. In this article, we will discuss the different options that are available to post-process your astrophotography images.
Note: Don’t miss the detailed video at the end of this article. It was created to help show you how to process your images with some of the software mentioned in this article.
What Does Post-Processing Mean In Astrophotography?
In astrophotography, post-processing includes steps that are crucial to the quality of the final image. Those steps can be summarized as:
- Histogram stretching
- Gradients and light pollution removal
- Star color calibration
- Star reduction and star removal
- Sharpening and noise reduction
- Final tweaks
Of the steps mentioned above, it is worth spending a few words on Histogram Stretching, as it is of utmost importance in deep-sky astrophotography.
What Histogram Stretching Is And Why You Need It
With image stacking, you have combined all your light frames (the actual images of the sky) into a single image with an enhanced signal-to-noise ratio.
With deep sky astrophotography, this stacked image can be surprisingly dark, with only a few bright stars visible.
There is nothing wrong with it, as all the details and information are there, but hidden in the dark background. And this is why this process also goes under the name of background extraction.
Histogram stretching can be done manually using Adobe Photoshop or in automatic/semi-automatic way using astrophotography software such as Astro Pixel Processor, Star Tools, or His Majesty PixInsight.
A rigorous explanation on how digital data are recorded and how the histogram works can become fairly technical and is beyond the scope of this article.
To keep it simple, let’s say that when you perform the stretching of the histogram, you are broadening the histogram, thus pushing details that were crammed in the blacks towards the middle tones.

And you do that slowly, in small steps, to retain the best possible image quality.
The process allows us to take full advantage of the image stacking process, and it results in a cleaner, brighter image with a lot of details that were not visible (or barely visible) in the single exposures.
Star Reduction / Star Removal
Star reduction is another process that is standard when editing deep-sky astrophotography.
While it seems odd that you want to shrink or remove stars from a photo about stars, this process aims to make the multitude of visible stars in the image less imposing and distracting.
By reducing enlarged stars due to the histogram stretching and by removing the smallest stars, you make the deep sky objects in the image more visible, as shown in the image below.
The procedure is particularly useful when shooting deep-sky objects, such as nebulae, that are in the Milky Way Band.
Software For Astrophotography Post-Processing
We can group the software for astrophotography post-processing into two categories:
- generic photo editors, such as Photoshop, Gimp, Affinity Photo, etc.
- Astrophotography editors, such as StarTools, Nebulosity, Astro Pixel Processor, Pixinsight, etc.
The main advantage of generic photo editors over specific astrophotography editors is versatility.
With a generic photo editor, it is easy to post-process all kinds of astrophotography, from deep-sky imaging to lunar and planetary shots, passing for star trails and starry landscapes.
In this article, for example, we discussed how to stack starry landscape images in Photoshop.
Not many astrophotography editors are this flexible.
Here is a list of software that are most commonly used to post-process astrophotography images.
Adobe Lightroom CC
Generic Photo Editor | Commercial From $9.99 Subscription Plan | Windows, Mac OS X, iOS
Pros
- Easy to use
- Powerful image development and image organizer
- Easy integration with Photoshop
- Can use photographic plugins
Cons
- Can’t do the complex editing needed for astrophotography (histogram stretching, star reduction, etc.)
- Limited to cosmetic tweaks
Adobe Lightroom is a popular, easy to use and fairly powerful RAW developer and image organizer.
Its usefulness in astrophotography is somewhat limited, as you cannot perform complex tasks such as histogram stretching, advanced light pollution and gradient removal, star reduction, etc.
On the other hand, it is a terrific editor for the final cosmetic tweaks to your image and to organize them in collections, per tag, and location. Lightroom is also great for color proofing your images before printing them.
If you are subscribing to the Adobe Photography Plan, you also have Photoshop CC included for free. And here is where things get interesting.
To get the best from the two worlds, load your stacked images in Lightroom, organize them in collections, and call Photoshop from within Lightroom for the astro-specific editing (histogram stretching, etc.).
Then make the final tweaks in Lightroom.
Adobe Photoshop CC
Generic Photo Editor | Commercial From $9.99 Subscription Plan | Windows, Mac OS X, iOS
Pros
- Versatile and Powerful Photo Editor / Image Manipulation Software
- Suitable for deep sky and planetary astrophotography as well as star trails and starry landscapes
- Astrophotography Action Sets and Plugins Available
- Subscription Plan with Photography Bundle
Cons
- Lacks Some Advanced Features for Astrophotography
Photoshop is one of the most commonly used software in the field of photography editing and image manipulation, and it can be used to post-process astrophotography work.
If you are a beginner astrophotographer, are on a tight budget, or already own Photoshop, you should give it a try, as all the basic post-processing steps can be performed in this software.

If you need more advanced features, you can also expand Photoshop’s capabilities thanks to the many astrophotography-related action sets, plugins, and panels available.

Finally, with the Camera Raw filter and other photographic plugins (such as those for smart sharpening and advanced noise reduction), you can perform with ease all the final tweaks an image may need.
As a Photoshop user, I tried many plugins and action sets for astrophotography, and here is my must-have extensions list.
Astronomy Tools by ProDigital
Actions Pack For Deep Sky Astrophotography | Commercial $21.95 | Windows, Mac OS X
A rich set of actions suitable for post-processing astrophotography images. The set includes actions such as star reduction, enhanced DSO, light pollution and color gradient removal, sharpening, and noise reduction.
Photokemi’s Star Tools by Ken Mitchel
Actions Pack For Deep Sky Astrophotography | Commercial $14.95 | Windows, Mac OS X
Similarly to Astronomy Tools, this action set is most useful for deep space astrophotography.
It offers advanced star removal and star reducing actions, semi-automatic histogram stretching, different sharpening and noise reduction actions, as well as actions such as nebula filters and star color enhancement.
There is also a set of extra actions, available for $6.95.
GradientXterminator by Russell Croman
Plugin For Deep Sky Astrophotography | Commercial $49.95 | Windows, Mac OS X
This plugin is a gradient removal tool that is easy to use and extremely effective. Despite a rather steep price (a trial is available for you to test the plugin), this is a terrific add-on for Photoshop, if you are serious about deep-sky astrophotography.
Hasta La Vista Green! (HLVG) by Rogelio Bernal Andreo
Plugin For Deep Sky Astrophotography | Donationware | Windows
Despite its old age, this plugin is still useful, and it does an excellent job of removing green noise and the green casts such noise may cause in some images.
Astro Panel By Angelo Perrone
Panel For Starry Landscape And Deep Sky Astrophotography | Commercial | Windows, Mac OS X
Astro Panel consists of a rich set of functions and methods that produce high quality starry landscapes and Milky Way images.
It is also easy to process deep-sky photos thanks to advanced functions for reducing digital noise and hot pixels, removing gradients, managing artificial flats, and much more.
Furthermore, astronomical images aside, you can use the Astro Panel to edit classic landscape images too.
Affinity Photo
Generic Photo Editor | Commercial $49.99 | Windows, Mac OS X, iOS ($19.99)
Pros
- Affordable
- Powerful
- The interface and commands are similar to Photoshop for an easy switch
- Suitable for deep sky and planetary astrophotography as well as star trails and starry landscapes
Cons
- Lacks third-party action sets, plugins, and panels
Affinity Photo from Serif Lab is a great, affordable alternative to Photoshop, and you do not need to pay for a subscription plan.
With Affinity Photo, you can carry out with ease all of the basic astrophotography post-processing.
But since there are no plugins, action sets, and panels to help you out, you have to learn to do things manually, even the more advanced tasks such as star reduction.
Gimp
Photo Editor | Freeware | Windows, Mac OS X, Linux
Pros
- Freeware
- Great community and lot of info available
- Powerful
- Suitable for deep sky and planetary astrophotography as well as star trails and starry landscapes
Cons
- Interface is a bit confusing
- Lacks third-party action sets, plugins, and panels
Gimp is the historical freeware alternative to Photoshop. Since it is freeware and has been on the market for many years, there is a big community of users, so it is easy to find relevant tutorials and guides to help you out.
The software has a slightly confusing interface, particularly if you are trying to switch from Photoshop, but it is powerful enough to let you edit your astrophotography images with ease.
Unfortunately, there are no third-party action sets, plugins, or panels to help you automate some tasks. As with Affinity Photo, you have to learn how to do everything manually.
Star Tools
Astrophotography Post-Processing Tools | Commercial $45 | Windows, Mac OS X, Linux
Pros
- Affordable
- Multiplatform
- Offers many advanced tools
- Trial without time limit
Cons
- Interface is a bit confusing
- Convoluted workflow
- Slower than other software
StarTools is a deep-sky post-process editor that does everything you need except the initial light frame calibration and stacking.
Once you have the stacked image from, say, Deep Sky Stacker, you can post-process it in StarTools, taking advantage of the many tools the software has to offer.
The interface is a bit confusing, and it may take a while to get used to the convoluted editing workflow.
Fortunately, the trial version never expires, so you can take all the time you need to experiment with StarTools before deciding if it is for you or not. The only limitation of the trial is that you cannot save your results.
SiriL
Multipurpose Astrophotography Editor | Freeware | Windows, Mac OS X, Linux
Pros
- Freeware
- Multiplatform
- Active Development
- Suitable for different kinds of astrophotography
- Fairly easy to use
- Powerful full-grown astrophotography software
Cons
- Developing the image is a lengthy process
- Interface is a bit confusing
I’m no expert with SiriL, but it is probably the only full-grown astrophotography editor that is freeware and multiplatform.
Siril will allow you to perform all the essential steps in your astrophotography editing workflow, from image calibration and stacking to (manual or auto) histogram stretching and post-processing.
Since it is free, if you are looking for an astrophotography package, SiriL is worth downloading and having a go with.
Nebulosity
Deep Sky Astrophotography Editor | Commercial $95 | Windows, Mac OS X
Pros
- Capable full astrophotography editor
- Can calibrate and stack your images
- It offers many advanced tools
Cons
- Not abandonware, but development is somewhat slow
- The interface feels old and not very user friendly
Nebulosity 4 was my first software specific to astrophotography. It is intended for deep sky astrophotography and is fairly easy to use.
It offers a good way to calibrate and stack your images, and you can use it for stretching the histogram, tightening the stars, calibrating the background colors, and performing sharpening and noise reduction.

But the interface is not very intuitive, it looks “old,” and while development is ongoing, it is not as quick as that of other software.
Astro Pixel Processor
Deep Sky Astrophotography Editor | Commercial €60/Yr (Renter’s License) Or €150 (Owner’s License) | Windows, Mac OS X, Linux

Pros
- Great deep sky astrophotography package
- Powerful
- Easy to use
- Batch processing
- 30-day free trial available
- Suitable for creating stunning mosaics with ease
- Active development
- Rental license available
Cons
- Vignetting removal tool could be better
- No star reduction methods available
Astro Pixel Processor is my go-to software for deep sky astrophotography, and I decided to go with the renter’s license to always work with the latest version of the software.

The interface is easy to navigate, options are explained by text messages that appear when you hover over them with the mouse, and the different tabs are numbered.

This means that there is no guessing in establishing the best workflow: just follow the numbers from 1 to 6 and jump to tab number 9 for post-processing the stacked image.

You can run all the steps one at a time or set them up and run them all with batch processing: this way, you can do other stuff while the software calibrates and stacks your images.
If you are looking for a way to edit your deep-sky images and create mosaics, I vouch for Astro Pixel Processor.
PixInsight
Multipurpose Astrophotography Editor | Commercial €230+VAT | Windows, Mac OS X, Linux
Pros
- The best and most complete astrophotography editor on the market
- Multiplatform
- Suitable for Planetary and Deep-Sky astrophotography
- 45-day free trial available
Cons
- Expensive
- Extremely steep learning curve
- Requires a powerful computer to run smoothly and conveniently fast
I will be honest with you: I requested a trial (and it was granted twice), but both times I ran away from PixInsight screaming in despair.
Not that PixInsight is bad or lacks crucial functions, but it is very complicated for beginners to use, and the learning curve is very steep.

Granted, since PixInsight is the reference software for the category, there are tons of tutorials and guides online (Light Vortex Astronomy has some of the best ones, and they are free). But you need to spend a lot of time in front of your computer, particularly if you have an old one.
But if you can master it, you will be rewarded with Pro-grade deep sky astrophotography images.
A Comprehensive Video About Post-Processing
In this video, I show you how to post-process a deep sky image using some of the software discussed in this article.
While it is not a complete tutorial on post-processing deep sky images, it gives you a feeling for how easy (or not) it is to use those programs and where they differ.
Conclusions
Stacking astrophotography images is only the first step in the lengthy astrophotography editing process. In this article, we have discussed the different software that is available to post-process the stacked image to obtain a compelling image of the night sky.
Some are free, some are commercial, some are specific to deep sky astrophotography while others are generic photography editors, and they all have their pros and cons.
This guide will help you to decide which software is best for you.
Personally, I am a fan of Astro Pixel Processor for deep sky astrophotography, as it is powerful and easy to use, and of Photoshop for its flexibility.