PixInsight Thread


TerryMcK

I bought PixInsight recently and have been trying to get to grips with it. As many of you probably already know, it is not easy, with a very steep learning curve.

I purchased the Warren Keller tome "Inside PixInsight" Second Edition and have been using that as a little light bedtime reading!

Then looking around t'internet I discovered numerous resources, one of which is mastersofpixinsight.com. This website is run by Warren, Dr. Ron Brecher and Pete Proulx. A very informative website. They have been running live streamed courses, of which I attended the first two: "PixInsight for Newbies" and "Next Steps for Newbies". These were two-hour courses where you got tuition from Warren and Ron, with Pete doing the hosting.

I learnt so much from those two two-hour courses that I have signed up for the next course, "Dialling Down The Noise", in November. The courses are a very reasonable $35 a session, which is about £28, and I think is great value for money for software that costs so much! I will probably pay for the upcoming courses too when they are announced.

You get data from each course, and access to recordings of the live sessions for three months afterwards.

 

So I thought I would start this thread as somewhere we can put PI tricks, tips, experiences etc.

 

 

  • Like 1

I use a fraction of what PI can do and I am eager to learn more.  I like to understand why and how things work so I don't like following recipes - which is what many of the online resources are (and I've discovered some of them are plain wrong!)   I'm not actually sure that PI has a steeper learning curve than anything else - it just feels that way because bits of the UI are very idiosyncratic and the help files are - in many places - frankly laughable.

That's a shame because it is undoubtedly a very capable piece of software.

So looking forward to seeing hints and tips on this thread.  If I can gather the energy I may post my version of Jon Rista's deconvolution workflow that works very well for me.

  • Like 2

One trick I tried on my recent Pacman was, after doing a linear fit on the separate channels and a little noise reduction, to run Starnet on each channel and a combination of Convolution and Deconvolution on the stars and nebula separately. I masked off the relevant structures first.

Then after combining the channels there were just some small tweaks in colour, a little chrominance noise reduction, Dark Structure Enhance.....
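The linear fit step mentioned above is worth unpacking: it rescales one channel's brightness to match a reference before the channels are combined. Here is a minimal numpy sketch of the idea, using a straight least-squares fit (an illustration only; PixInsight's LinearFit uses a more robust fitting scheme):

```python
import numpy as np

def linear_fit(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Fit reference = a*target + b by least squares, then rescale the
    target channel so its brightness scale matches the reference."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

# Demo: a target channel with the same structure but a different scale.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = 0.5 * ref + 0.1            # rescaled copy of the reference
matched = linear_fit(ref, tgt)
print(np.allclose(matched, ref))  # → True
```

After matching, equal signal levels in each channel correspond to equal pixel values, which is what makes the subsequent channel combination and colour calibration behave sensibly.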

  • Like 1

I'm going over some of my older mono data with PixInsight, learning the processes and comparing the results with APP/Photoshop.

At the moment I know I am at the bottom of a very steep curve. I note it takes a lot longer to stack in comparison to APP, despite my machine being a powerful multi-core Xeon workstation with 64GB RAM and SSD disks.

That said, I can see it is doing a lot using the WeightedBatchPreprocessing (WBPP) script to calibrate, register and integrate, which seems very comprehensive.


I couldn't get rid of a granular background which was predominantly green blobs, so I've gone back to APP to do the integration while I get used to the post-processing.

  • I used PI to combine the channels in this image as HSO (RGB)
  • Then cropped the edges with DynamicCrop
  • Next was DBE, which did actually work when I left it to its own devices and didn't twiddle with it
  • Next was Photometric Colour Calibration on the stars using the Gaia database - very cool
  • MultiscaleLinearTransform noise reduction was next with these parameters:
[screenshot: MultiscaleLinearTransform settings]
  • After doing a histogram transformation I masked off the nebula and stars to reduce the background a bit with curves
  • Next was a double local histogram transformation
  • A final removal of green in the background, and then I applied an ICC sRGB colour profile to the image
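The green-removal step at the end of this list is typically SCNR with Average Neutral protection, whose behaviour is simple enough to sketch: green is clamped to the mean of red and blue, blended by an amount parameter. This is an illustrative numpy version of that idea, not PixInsight's code:

```python
import numpy as np

def scnr_average_neutral(rgb: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Suppress green casts: wherever green exceeds the red/blue average,
    pull it down to that average, blended by `amount` (0..1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_new = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_new
    return out

pixel = np.array([[[0.2, 0.8, 0.3]]])   # green-dominated background pixel
print(scnr_average_neutral(pixel))       # green clipped from 0.8 to 0.25
```

Because genuinely green-free pixels (where G ≤ (R+B)/2) are untouched, this only attacks the cast, which is why it is so much safer than a blanket curves adjustment on the green channel.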

Phew

Now I've converted it to JPEG (in Photoshop as it happens) and hopefully it will show up here:

https://imgur.com/a/djEzG1n

 

  • Like 2

  • 4 weeks later...

It has been a while, but I was on another PixInsight course with Warren Keller, Dr. Ron Brecher and Pete Proulx last night. This one was called "Dialling Down the Noise".

The use of tools such as MLT denoise before stretching and TGV denoise after stretching was discussed. However, one of the great denoise tools is called MURE Denoise. This is normally applied to monochrome images before any stretching, while the image is still linear.

If you have an RGB image and a Lum layer you would apply it to the Lum image as that contains the detail. If you are doing narrowband, as I do, you can apply it to any of the stacked images. For instance you may have a stack of Ha. Then you would apply it to the final stacked Ha image. 

Of course there is no substitute for clear skies, but if, like me, you live in an area where imaging is challenging, it is a great help. For instance, the target is just about to set behind the neighbour's tree, or the clouds are about to come in, so you have limited time to image. This results in a lack of data and more electronic noise in the image.

So I decided to try out MURE Denoise on a target I shot in September and the results were astonishing. The noise that was previously in the finished image was gone, or at least far less apparent.

 

MureDenoise is in Scripts > Noise Reduction.

  • The process: first of all, set the MURE settings and execute in the global context.
  • Next load in two uncalibrated flat frames and two dark frames, straight out of the camera.
  • Leave Offset at zero, regardless of whatever offset you used on your original capture.
[screenshot: MURE detector settings dialog]

Click Estimate and the two numbers you are interested in are Gain and Gaussian noise. Make a note of these with good old-fashioned paper and pencil. Unfortunately there is no way to transfer them to the next step, but hey ho. Click Dismiss as you are done with the settings now.
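The dialog doesn't show how it derives those two numbers, but the same quantities fall out of the classic photon-transfer method applied to exactly these inputs (two flats, two darks). This is an illustrative numpy sketch under the standard Poisson-plus-Gaussian noise model, not the script's actual code:

```python
import numpy as np

def photon_transfer(flat1, flat2, dark1, dark2):
    """Classic photon-transfer estimate of camera gain (e-/ADU) and
    Gaussian read noise (e-) from two flats and two darks.
    Differencing each pair cancels fixed-pattern signal, leaving
    only the random noise."""
    # Mean flat signal in ADU, with the dark level subtracted.
    signal = flat1.mean() + flat2.mean() - dark1.mean() - dark2.mean()
    # Poisson variance scales with gain; dark variance is pure read noise.
    gain = signal / (np.var(flat1 - flat2) - np.var(dark1 - dark2))
    read_noise = gain * np.std(dark1 - dark2) / np.sqrt(2.0)
    return gain, read_noise
```

The intuition: shot noise variance in ADU is signal/gain, so dividing the dark-corrected signal by the dark-corrected variance of a flat pair recovers the gain; the dark pair alone then yields the Gaussian read noise.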

  • Next run MureDenoise and execute it in the global context again.
  • Select the image you want to work on.
  • Make sure the combination count matches the number of frames in your stack (75 in my case) and the interpolation method is Lanczos-3.
  • In the Detector section enter the Gain and Gaussian noise numbers you recorded.
[screenshot: MureDenoise dialog]

Drag the blue triangle (in my example it becomes Process28) to the workspace and click Dismiss.

[screenshot: process icon on the workspace]

The next bit is time consuming, so work on a preview first. Try to make the preview contain some background and some good data.

Drag the process icon onto the preview and look at the change. If you want to see what it was like before, drag the preview onto the image task bar (left hand side). Simply click between the two preview tabs and you see a before and after. If you want to make any alterations, reopen the process icon and change the variance scale. Ron said it was best to go no lower than 0.9, but of course you are free to experiment.

For narrowband do the same to the other images, OIII and SII for example.

Then carry on with your normal workflow.

Mine is the following

  • Channel Combination
  • Dynamic Crop
  • Dynamic Background Extraction
  • Photometric Colour Calibration based upon the image parameters recorded in the FITS header. This makes sure that plate solving works ok. Then you get the stars looking the right colour.
  • MultiScale Linear Transform is next with 5 layers
  • Histogram transform based upon the Screen transfer function
  • Convert to RGB Working space
  • TGV Denoise
  • etc.
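The Channel Combination step at the top of that workflow is a plain mapping of narrowband stacks onto RGB. A small sketch of the idea in numpy (`combine_palette` is a made-up helper name; the SHO/HOS mappings themselves are the standard ones discussed in this thread):

```python
import numpy as np

def combine_palette(ha, oiii, sii, palette="SHO"):
    """Map mono narrowband stacks onto RGB channels.  "SHO" is the
    Hubble palette (SII -> R, Ha -> G, OIII -> B); "HOS" puts Ha in
    red, OIII in green and SII in blue instead."""
    channels = {"H": ha, "O": oiii, "S": sii}
    return np.stack([channels[c] for c in palette], axis=-1)

# Demo with uniform synthetic "stacks".
ha = np.full((2, 2), 0.8)
oiii = np.full((2, 2), 0.4)
sii = np.full((2, 2), 0.2)
rgb = combine_palette(ha, oiii, sii, "SHO")
print(rgb[0, 0])  # → [0.2 0.8 0.4]
```

Since Ha is almost always the strongest signal, whichever channel it lands in dominates the colour balance, which is why SHO images come out green before palette adjustments.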

NGC6995 from the backyard in SHO

https://imgur.com/a/9jGfF0S

  • Like 2

Posted by: @TerryMcK

Then carry on with your normal workflow.

Mine is the following

  • Channel Combination
  • Dynamic Crop
  • Dynamic Background Extraction
  • Photometric Colour Calibration based upon the image parameters recorded in the FITS header. This makes sure that plate solving works ok. Then you get the stars looking the right colour.
  • MultiScale Linear Transform is next with 5 layers
  • Histogram transform based upon the Screen transfer function
  • Convert to RGB Working space
  • TGV Denoise
  • etc.

My workflow for narrowband is somewhat similar, though not in the precise order you have them.

  • Dynamic Crop
  • DBE
  • Linear Fit
  • Channel Combination
  • Photometric Colour Calibration
  • Background Neutralization
  • MultiScale Linear Transform

For stretching, I will use one of three methods to determine the best result:

  • A combination of STF and Histogram Transformation
  • A combination of Masked Stretch and Histogram Transformation
  • A combination of ArcsinhStretch and Histogram Transformation
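All three of those stretches lean on PixInsight's midtones transfer function, MTF(m, x) = ((m - 1)x) / ((2m - 1)x - m), which maps the chosen midtones level m to mid-grey while pinning 0 and 1. A sketch of an STF-style histogram stretch built on it (illustrative; the real HistogramTransformation also handles per-channel parameters and dynamic range expansion):

```python
import numpy as np

def mtf(m: float, x):
    """PixInsight's midtones transfer function: input level m maps to 0.5,
    while 0 stays 0 and 1 stays 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def histogram_stretch(img, shadows=0.0, highlights=1.0, midtones=0.5):
    """Shadows/highlights clip followed by a midtones balance - the core
    of a histogram transformation or STF auto-stretch."""
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, x)

print(mtf(0.25, 0.25))  # → 0.5: the midtones level lands on mid-grey
```

Lowering the midtones parameter below 0.5 brightens the faint end hard while compressing highlights, which is exactly what makes a linear deep-sky frame visible.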


  • 2 weeks later...

Hi, I wonder if I can ask for some assistance. Brian @AstronomyUkraine has been most helpful, but I am still struggling. I have posted this on the PI forum, but would appreciate any thoughts.

I have taken about 10 hours on Sh2-185 - The Ghost of Cassiopeia.

Generally my workflow is:

  • WBPP or manually combine, depending on which needs more attention if spanned over multiple nights
  • Dynamic Crop
  • Deconvolution (I do use it, but need more practice)
  • DBE on each channel
  • Channel Combination
  • Colour Calibration, Photometric Colour Calibration or AutoColour
  • BGN if I am having issues, but I avoid it as much as I can
  • MLTNR
  • Histogram Stretch or Masked Stretch, but lately I have been using EZ Soft Stretch as that is kinder to the image
  • TGVNR
  • BGN to tweak
  • Exponential Transformation
  • Curves and Histogram adjustments to correct and balance the colour
  • HDRMultiscaleTransform
  • Sometimes UnsharpMask

I do similar with the RGB channel for the stars, then use convolution on the stars and PixelMath to apply.

All the way through I am using a variety of Masks and gentle use of SCNR.

Occasionally I will use other tools like Starnet, but that is generally my workflow.

My problem is that on first combining in the SHO palette I get this: -

[attached screenshot]

If I extract a Lum from the above, all seems fine: -

[attached screenshot]

So all looks fine so far.

If I then apply the mask by dragging onto the colour image I get this: -

[attached screenshot]

If I then invert the mask I get this: -

[attached screenshot]

Whether I show or hide the mask, the image is barely discernible, so I am getting a bit confused, as one of the most important things in my processing is the use of masks.

As you can see, I need to reduce the green significantly, but it is proving impossible for me.

The finished image in the HOS palette is here: -

https://www.backyardastro.org/community/deep-sky-imaging/sh2-185-the-ghost-of-cassiopeia-your-friendly-ghost/

I've uploaded the master files, so if anyone would like to see what I am doing they are welcome to show me the error of my ways:

https://drive.google.com/file/d/1gaBFYiulIHWZkcTVmGZEU48dSl2gpcGl/view?usp=sharing

I have tried colour masks, but am still a little inexperienced in their use.

Any help would be most appreciated.

Thank-you.


Posted by: @Jkulin

I have tried colour masks, but am still a little inexperienced in their use.

Any help would be most appreciated.

With colour masks, you are protecting everything except the colour you want to adjust. For instance, if you have magenta in the background, selecting a magenta colour mask will mask everything except the magenta, allowing you to work on it in curves. The only two settings you need to worry about are making sure you have the correct image selected and that the blur strength is adjusted; a setting of 2 or 3 is usually sufficient. Sometimes a colour mask will come out totally black, meaning that colour is not present in the image.

This image is the magenta colour mask associated with your image. As you can see, all the nebula is masked off, leaving you free to work on the background. Another advantage of this mask is that you can invert it and work on everything except the background.
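The ColourMask script's internals aren't shown here, but the effect described above is easy to sketch in numpy/scipy (an illustration of the idea, not the script's code): magenta strength is where both red and blue exceed green, and the mask is blurred so curves adjustments blend smoothly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def magenta_mask(rgb: np.ndarray, blur_sigma: float = 2.0) -> np.ndarray:
    """White where a pixel is magenta-ish (red and blue high, green low),
    black elsewhere, softened by a Gaussian blur."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = np.clip(np.minimum(r, b) - g, 0.0, 1.0)
    return gaussian_filter(mask, sigma=blur_sigma)
```

Applied as a mask, the white (magenta) areas are exposed to the adjustment and everything else is protected; inverting it flips the protection, exactly as described above.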

[attached screenshot: magenta colour mask]
  • Like 1

A QUICK GUIDE TO CALIBRATING, STACKING, DARKS, FLAT DARKS AND FLATS IN PIXINSIGHT

This is the basic workflow I use to acquire the master flats and master darks in PixInsight, in preparation for calibrating the lights, using the manual method. This guide is for narrowband mono images. The technique is more or less the same for OSC cameras, except that I use flat darks instead of bias frames.

First up is stacking the dark frames to make a master dark that mirrors the light frames in gain and exposure length. As the two images below show, it is a straightforward operation. These settings will also be applied when stacking the flat dark frames.

Important! Uncheck Evaluate Noise when stacking.

[screenshot: ImageIntegration settings]

You have a choice of rejection algorithm when stacking darks or flat darks. I usually use Winsorized Sigma Clipping and have never had any problems with it. Once stacked, you will have an assortment of master darks and flat-dark stacks which correspond to the lights and flats you have taken.
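Winsorized Sigma Clipping differs from plain sigma clipping in that outliers are clamped to the clipping boundary rather than discarded, which keeps the per-pixel statistics stable with modest stacks. A simplified per-pixel sketch of that idea (PixInsight's implementation is more elaborate, with separate low/high thresholds and corrected scale estimates):

```python
import numpy as np

def winsorized_sigma_clip_stack(frames: np.ndarray, sigma: float = 3.0,
                                iterations: int = 5) -> np.ndarray:
    """Robust per-pixel stack of frames (shape: n_frames x H x W).
    Each iteration, values beyond sigma * std of the pixel stack are
    clamped (winsorized) to the boundary, then the statistics are
    refreshed; the final output is the mean of the clamped stack."""
    data = frames.astype(float).copy()
    for _ in range(iterations):
        centre = np.median(data, axis=0)          # robust centre estimate
        spread = np.std(data, axis=0)             # refreshed each pass
        low, high = centre - sigma * spread, centre + sigma * spread
        data = np.clip(data, low, high)           # clamp, don't reject
    return data.mean(axis=0)
```

On a dark stack this pulls hot pixels and cosmic-ray hits toward the true dark level instead of simply dropping frames, so the master keeps the full statistical weight of every sub.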

[screenshot: ImageIntegration rejection settings]

Calibrating flats is the next operation. This is another straightforward operation; the only thing you need to make sure of is that your flat darks correspond to the exposure of your flat frames. With narrowband you will have 3 sets of flats to calibrate. As you see from the image below, the only two settings you need to touch are the destination folder and choosing the correct flat dark stack.
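The arithmetic behind these calibration steps is worth seeing in one place. A minimal sketch with assumed helper names (simplified compared with ImageCalibration, which also handles pedestals, dark frame scaling and overscan):

```python
import numpy as np

def calibrate_flat(flat: np.ndarray, master_flat_dark: np.ndarray) -> np.ndarray:
    """Remove the dark signal from a flat using a flat-dark matched
    in gain and exposure."""
    return flat - master_flat_dark

def calibrate_light(light: np.ndarray, master_dark: np.ndarray,
                    master_flat: np.ndarray) -> np.ndarray:
    """Standard light calibration: subtract the dark, then divide by
    the mean-normalised master flat to undo vignetting and dust."""
    flat_norm = master_flat / master_flat.mean()
    return (light - master_dark) / flat_norm
```

Dividing by the normalised flat is the step that makes the vignetted field uniform again, which is why a flat calibrated with the wrong flat-dark (the exposure-matching warning above) leaves a residual gradient.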

[screenshot: ImageCalibration settings]

Once you have all your flats calibrated, it's time to stack them. Here the settings are a little different from stacking the dark frames.

Normalization is set to Multiplicative in the top image.

[screenshot: ImageIntegration settings for flats]

In Pixel Rejection (1), Normalization is set to Equalize Fluxes.

[screenshot: Pixel Rejection (1) settings for flats]

Once this process is complete, you will have 3 master flat files, ready to use with your master darks to calibrate your lights. Don't forget to save all master frames in the XISF format, or PixInsight will shout at you.

 

 

  • Like 1

TGVDenoise v Multiscale Linear Transform.

This is a comparison between TGVDenoise and MultiscaleLinearTransform in PixInsight, using a new technique shown on Shawn's YouTube channel, Visible Dark, last night.

The two images show the results of applying TGVDenoise to one image and MultiscaleLinearTransform to the other. Both images had background extraction and Photometric Colour Calibration applied. The top image had MultiscaleLinearTransform applied before stretching. The second image had TGVDenoise applied after stretching, and no MultiscaleLinearTransform.

[attached image]

MultiscaleLinearTransform

[attached image]

TGVDenoise

There is a noticeable difference in the noise levels of the two images. The one using the new method shown by Shawn certainly reduced the noise more. I had to make a few adjustments to stop the image looking too plastic, but overall the technique looks great.
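To put a number on "noticeable difference" rather than judging by eye, a robust noise estimate of a background patch works well. A sketch using the Gaussian-scaled median absolute deviation (PixInsight's own noise evaluation uses a more elaborate multiscale method; this is just a quick comparator):

```python
import numpy as np

def background_sigma(patch: np.ndarray) -> float:
    """Robust noise estimate of a star-free background patch:
    median absolute deviation scaled to the Gaussian-equivalent
    standard deviation (factor 1.4826)."""
    med = np.median(patch)
    return float(1.4826 * np.median(np.abs(patch - med)))
```

Measure the same background preview in both denoised versions; the MAD-based sigma is barely affected by the few bright pixels a plain standard deviation would be skewed by, so the two numbers are directly comparable.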

  • Like 1
