#"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""# # Total script NORSCRIPT # #"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""# import pyfits, scipy, pylab, sys, math, os import scipy.optimize from pylab import * from scipy.interpolate import UnivariateSpline from datetime import datetime startTime = datetime.now() ###################################################Importing datacube & template######################################### dcube = raw_input('Please select the datacube, type \'ovrv\' for program overview, or \'help\' for help: ') ############Overview########### while dcube == 'ovrv': print 'This program takes a user defined datacube of a galaxy, and computes its redshifts and FWHMs at each spexel by cross-correlating the spectrum at that spexel with a user-defined template. When defining desired .fits files, it is important to add the .fits extention to the end of the filename (eg. \'Please select the datacube: NGC5170.fits\').\n' print 'The user is asked to define the cosmic ray cutoff values for the galaxy and the template. The galaxy cutoff can be determined by finding the i and j coordinates for one of the nucleus spexels, and plotting its spectrum, to see what the count rate is. The template spectrum is plotted before prompting for the cutoff value, so the user can decide on the cosmic ray cutoff value.\n' print 'When the redshift values and FWHMs for each spexel are computed, the program displays a histogram of values, for easier selection of higher and lower limits of the colour map. The redshift histogram spans from 210kms(10px) to 8400kms(400px), to encapsulate realistic redshift velocities, and the FWHM histogram spans between 0 and 1000kms(50px), to encapsulate realistic velocity variations. The FWHM histogram sometimes has two peaks - in this case the limits of the peak situated at higher velocity should be taken. This is due to the null result redshifts also having a definite FWHM, which is lower than the useful FWHMs.\n' print 'There are three main outputs from the program, the redshift, FWHM and brightness plots. At the end of execution, the values for each of these plots are saved in .txt files(eg. NGC5170_offsets.txt, NGC5170_FWHMs.txt and NGC5170_brightness.txt), which will be located in the same directory as the one the script is run from. \n' print 'If you require help at any stage of the program, type \'help\'' dcube = raw_input('Please select the datacube now: ') while dcube == 'help': print 'Type the name of the galaxy datacube you would like to plot. It is important to include the .fits extention at the end of the file name (eg. \'Please select the datacube: NGC5170.fits\'. NB. The datacube .fits file must be located in the same directory as the script file! If you require help at any stage, type \'help\'.' dcube = raw_input('Please select the datacube now: ') template = raw_input('Please select the template: ') while template == 'help': print 'Type the name of the template you would like to use for cross-correlation. It is important to include the .fits extention at the end of the file name. NB. The template must be located in the same directory as the running script!' 
    template = raw_input('Please select the template: ')

#### Headers
dcubeheader = pyfits.getheader('%(dcube)s'%vars())
tempheader = pyfits.getheader('%(template)s'%vars())
objname = dcubeheader['OBJECT']
axis1 = dcubeheader['NAXIS1']
axis2 = dcubeheader['NAXIS2'] #Dimension reading
axis3 = dcubeheader['NAXIS3']
dcubestep = dcubeheader['cdelt3']
tempstep = tempheader['CD3_3'] #Step reading and comparison
if scipy.absolute(dcubestep-tempstep) > 1e-4:
    print 'The difference between steps is',scipy.absolute(dcubestep-tempstep),'which is greater than the threshold, 1e-4'
    sys.exit(1)

### Data
y = pyfits.getdata('%(dcube)s'%vars())
y1 = y[:,0,0]
y1axis = scipy.linspace(0,len(y1)-1,num=len(y1))
y2 = pyfits.getdata('%(template)s'%vars())
y2axis = scipy.linspace(0,len(y2)-1,num=len(y2))

## Selecting the mode
mode = raw_input('Would you like the output in kms(kms) or pixels(px)? ')
modes = ['kms','px','help']
while mode not in modes:
    mode = raw_input('Please select a valid mode(kms/px): ') #avoids program exit in case of misspelling
while mode == 'help':
    print 'Choose the units you would like to have your final plots in.'
    mode = raw_input('Would you like the output in kms(kms) or pixels(px)? ')

########### Checking directory existence, for saving the plots and arrays ###########
def ensure_dir(f):
    d = os.path.dirname(f)
    if not os.path.exists(d):
        os.mkdir(d)

ensure_dir('%(objname)s/'%vars())
os.chdir('%(objname)s/'%vars())

######################################################### Cosmic ray processing ####################################################
cutoff = raw_input('Datacube cosmic ray cutoff: ')
while cutoff == 'help':
    print 'If unsure about cutoff counts, select around 800. If the final map of the galaxy shows anomalies, use the brightness plot to find the coordinates of the nucleus. Then plot that spexel (of the CR-processed datacube) with the attached \'spectrum.py\', and determine the cutoff value.'
    cutoff = raw_input('Datacube cosmic ray cutoff: ')
cutoff = float(cutoff) #convert the typed value to a number before comparing it with the counts
pylab.plot(y2axis,y2)
pylab.title('Help with selecting the template cutoff')
pylab.show()
tempcutoff = raw_input('Template cosmic ray cutoff: ')
while tempcutoff == 'help':
    print 'Select a cutoff that does not exclude any good data, as can be seen from the plot.'
    tempcutoff = raw_input('Template cosmic ray cutoff: ')
tempcutoff = float(tempcutoff)

### Datacube
print 'Datacube CR processing started.'
val = len(y1)/10 #progress is reported every 10% of the spectral axis
for i in scipy.arange(1,len(y1)-1,1): #range(len(y1))
    for j in range(axis2):
        for k in range(axis1):
            if y[i,j,k] > cutoff: #setting what happens above cutoff
                y[i,j,k] = cutoff
            else:
                if j > 0 and j < axis2-1 and k > 0 and k < axis1-1: #interior spexels
                    a = [y[i-1:i+2,j-1:j+2,k-1:k+2]] #3x3x3 filter (for now)
                    y[i,j,k] = scipy.median(a)
                elif j == 0 and k == 0: #top left corner
                    a = [y[i-1:i+2,j:j+2,k:k+2]]
                    y[i,j,k] = scipy.median(a)
                elif j == (axis2-1) and k == 0: #bottom left corner
                    a = [y[i-1:i+2,j-1:j+1,k:k+2]]
                    y[i,j,k] = scipy.median(a)
                elif k == 0 and j < (axis2-1): #left hand side
                    a = [y[i-1:i+2,j-1:j+2,k:k+2]]
                    y[i,j,k] = scipy.median(a)
                elif j == (axis2-1) and k == (axis1-1): #bottom right corner
                    a = [y[i-1:i+2,j-1:j+1,k-1:k+1]]
                    y[i,j,k] = scipy.median(a)
                elif j == 0 and k == (axis1-1): #top right corner
                    a = [y[i-1:i+2,j:j+2,k-1:k+1]]
                    y[i,j,k] = scipy.median(a)
                elif j == (axis2-1) and k < (axis1-1): #bottom side
                    a = [y[i-1:i+2,j-1:j,k-1:k+2]]
                    y[i,j,k] = scipy.median(a)
                else:
                    y[i,j,k] = y[i,j,k]
            if i%val == 0 and j == 1 and k == 1:
                print 'Cosmic ray processing is',(i/val)*10,'% done.'
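
#Illustrative sketch, not called anywhere in this script. The loop above clips values above the
#cutoff and median-filters the remaining samples; the function below shows a more conventional
#alternative for a single 1-D spectrum, in which only the samples flagged as spikes are replaced
#by a running median. scipy.signal.medfilt, the 5-pixel window and the replace-only-spikes
#strategy are assumptions included purely to document that alternative.
def despike_spectrum_sketch(spec, cutoff_value, window=5):
    import scipy.signal
    filtered = scipy.signal.medfilt(spec, kernel_size=window) #running median of the spectrum
    cleaned = spec.copy()
    spikes = spec > cutoff_value #flag samples above the cosmic ray cutoff
    cleaned[spikes] = filtered[spikes] #replace flagged samples with the local median
    return cleaned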
scipy.save('%(objname)s_processedDcube'%vars(), y)

if mode == 'kms':
    ensure_dir('kms/')
    os.chdir('kms/')
else:
    ensure_dir('pxl/')
    os.chdir('pxl/')

### Template
for i in range(len(y2)):
    if y2[i] > tempcutoff:
        y2[i] = tempcutoff #setting what happens above cutoff
    else:
        if i <= 2:
            y2[i] = scipy.median(y2[i:i+6])
        elif i >= len(y2)-3:
            y2[i] = scipy.median(y2[i-6:i]) #setting what happens below cutoff
        else:
            y2[i] = scipy.median(y2[i-3:i+3])
print 'Cosmic ray processing finished.',(datetime.now()-startTime)

##################################### Template continuum removal, smoothing, normalisation, endmasking ##############################
array2 = scipy.linspace(0,len(y2)-1,num=len(y2)) #makes an array for the template to be smoothed with
narray2 = scipy.linspace(0,len(y2)-1,num=len(y2)) #makes an array with which the template is to be normalised

#### Smoothing
l = 0
while l < len(y2):
    if l < 100:
        array2[l] = scipy.mean(y2[l:l+100])
    elif l > len(y2)-100:
        array2[l] = scipy.mean(y2[l-100:l])
    else:
        array2[l] = scipy.mean(y2[(l-100):(l+100)])
    l += 1
#Continuum removal
z2 = y2 - array2

#### Normalising
mi = 0
while mi < len(y2):
    if mi < 100:
        narray2[mi] = scipy.std(y2[mi:mi+100])
    elif mi > len(y2)-100:
        narray2[mi] = scipy.std(y2[mi-100:mi])
    else:
        narray2[mi] = scipy.std(y2[mi-100:mi+100])
    mi += 1
for index in scipy.arange(0,len(narray2),1): #guards against a divide-by-zero error in the normalisation step
    if narray2[index] == 0:
        narray2[index] = 1000000
z2 = z2 / narray2

### Endmasking
for end in range(len(z2)-100,len(z2)):
    z2[end] = z2[end]*(scipy.sin((scipy.pi/2)*(len(z2)-end)/100)) #end of template spectrum
for end2 in range(100):
    z2[end2] = z2[end2]*(scipy.sin((scipy.pi/2)*(end2)/100)) #start of template spectrum (first 100 pixels)

###################################### Datacube continuum removal, smoothing, normalisation, endmasking #####################################
scalconst = (dcubestep/5000)*3e5 #converts pixel offsets to kms
array1 = scipy.linspace(0,len(y1)-1,num=len(y1)) #makes an array for each datacube vector to be smoothed with
narray1 = scipy.linspace(0,len(y1)-1,num=len(y1)) #makes an array with which z1 is to be normalised
matrix = scipy.zeros((axis2,axis1)) #creates a matrix for offsets
fwhmatrix = scipy.zeros((axis2,axis1)) #creates a matrix for FWHMs
simple = scipy.zeros((axis2,axis1)) #creates a matrix for sums of counts
for i in range(axis2):
    for j in range(axis1):
        y1 = y[:,i,j]

        ################ Smoothing ###############
        k = 0
        while k < len(y1):
            if k < 100:
                array1[k] = scipy.mean(y1[k:k+100])
            elif k > len(y1)-100:
                array1[k] = scipy.mean(y1[k-100:k])
            else:
                array1[k] = scipy.mean(y1[(k-100):(k+100)])
            k += 1
        #Continuum removal
        z1 = y1 - array1

        ############### Normalising ##############
        ni = 0
        while ni < len(y1):
            if ni < 100:
                narray1[ni] = scipy.std(y1[ni:ni+100])
            elif ni > len(y1)-100:
                narray1[ni] = scipy.std(y1[ni-100:ni])
            else:
                narray1[ni] = scipy.std(y1[ni-100:ni+100])
            ni += 1
        for index in scipy.arange(0,len(narray1),1): #guards against a divide-by-zero error in the normalisation step
            if narray1[index] == 0:
                narray1[index] = 1000000
        z1 = z1 / narray1

        ############## Endmasking ###############
        for end in range(len(z1)-100,len(z1)):
            z1[end] = z1[end]*(scipy.sin((scipy.pi/2)*(len(z1)-end)/100)) #end of image spectrum
        for end2 in range(100): #scipy.arange(0,100,1)
            z1[end2] = z1[end2]*(scipy.sin((scipy.pi/2)*(end2)/100)) #start of image spectrum (first 100 pixels)

        ############ Cross correlating ###########
        #compute the cross-correlation between y1 and y2
        ycorr = scipy.correlate(z2, z1, mode='full') #####this is the important function#####
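        #For this 'full' correlation the zero-lag position falls at index len(y1)-1, so the
        #offset of the fitted peak from that index is the lag, in pixels, between the galaxy
        #spectrum and the template. The Gaussian fit below locates that peak to sub-pixel
        #precision, and scalconst = (dcubestep/5000)*3e5 then turns the lag into a velocity
        #(presumably with 5000 Angstroms as the reference wavelength and 3e5 kms as the speed
        #of light). As an illustration only: if cdelt3 were 1.25 Angstroms per pixel, one pixel
        #of lag would correspond to about (1.25/5000)*3e5 = 75 kms, so a 10-pixel lag would be
        #roughly 750 kms.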
        xcorr = scipy.linspace(0, len(ycorr)-1, num=len(ycorr)) #makes an x array to match up with the results of ycorr
        distancePerLag = 1 #1 because y1 will change from 0 to len(y1)-1 in increments of 1

        #define a gaussian fitting function with p[0] = amplitude, p[1] = mean, p[2] = sigma
        fitfunc = lambda p,x: p[0]*scipy.exp(-(x-p[1])**2/(2.0*p[2]**2))
        errfunc = lambda p,x,z: fitfunc(p,x)-z
        #guessing fit parameters
        p0 = scipy.c_[max(ycorr),scipy.where(ycorr==max(ycorr))[0],5]
        #fit a gaussian to the correlation function
        p1, success = scipy.optimize.leastsq(errfunc, p0.copy()[0], args=(xcorr,ycorr))
        #compute the best fit function from the best fit parameters
        corrfit = fitfunc(p1, xcorr)
        #get the mean of the cross-correlation
        xcorrMean = p1[1]
        #index converted to lag steps
        nLags = xcorrMean-(len(y1)-1)
        #compute the resulting offset from the lag and distancePerLag
        offsetComputed = -nLags*distancePerLag

        ##finding the FWHM
        #create a spline of xcorr against corrfit minus half of its maximum
        spline = UnivariateSpline(xcorr, corrfit-scipy.amax(corrfit)/2, s=0)
        r1, r2 = spline.roots() #find the roots
        FWHM = scipy.absolute(r1-r2) #find the full width at half maximum

        if mode == 'kms':
            matrix[i][j] = offsetComputed*scalconst
            fwhmatrix[i][j] = FWHM*scalconst
        else:
            matrix[i][j] = offsetComputed
            fwhmatrix[i][j] = FWHM
        simple[i][j] = scipy.sum(y1) #for plotting the brightness only
        if math.isnan(simple[i][j]) or simple[i][j] < 0:
            simple[i][j] = 0

        if j == 0:
            print ('Processing row %(i)s / %(axis2)s'%vars())

scipy.save('%(objname)s_redshifts'%vars(), matrix)
scipy.save('%(objname)s_FWHMs'%vars(), fwhmatrix)
scipy.save('%(objname)s_brightness'%vars(), simple)
print '\a' #plays a beep sound (the histogram limits now need to be selected)

######################################################## Histograms for scale selection ####################################################
print 'Select the lower and upper limits for each of the three plots by deciding, from the histograms, which data is not relevant. The FWHM histogram sometimes has two peaks - in this case the limits of the peak situated at the higher velocity should be taken. This is because the null-result redshifts also have a definite FWHM, which is lower than the useful FWHMs. The brightness map needs adjusting due to a small number of very hot pixels.'
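
#Illustrative sketch, not called by the script: the peak-measurement recipe used inside the loop
#above (least-squares Gaussian fit followed by spline roots at half maximum), wrapped as a
#stand-alone function so it can be tried on a single correlation curve. The function name, its
#argument and the starting sigma of 5 pixels are assumptions made only for this example.
def measure_peak_sketch(curve):
    xs = scipy.linspace(0, len(curve)-1, num=len(curve))
    fit = lambda p,x: p[0]*scipy.exp(-(x-p[1])**2/(2.0*p[2]**2))
    err = lambda p,x,z: fit(p,x)-z
    guess = [scipy.amax(curve), float(scipy.argmax(curve)), 5.0] #amplitude, mean, sigma
    best, flag = scipy.optimize.leastsq(err, guess, args=(xs,curve))
    model = fit(best, xs)
    roots = UnivariateSpline(xs, model-scipy.amax(model)/2, s=0).roots() #half-maximum crossings
    return best[1], scipy.absolute(roots[-1]-roots[0]) #peak centre and FWHM, both in pixels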
if mode == 'kms':
    pylab.hist(matrix.ravel(), bins=40, range=[10*scalconst,400*scalconst]) #histogram of offset values (flattened so all spexels go into one histogram), for selection of the upper and lower limits of the velocity plot
    pylab.title('Velocity values in kms')
    pylab.show()
else:
    pylab.hist(matrix.ravel(), bins=40, range=[10,400])
    pylab.title('Velocity values in pixels')
    pylab.show()
vMin = input('Select the lower limit: ')
vMax = input('Select the upper limit: ')
while vMax < vMin:
    print 'The lower limit must be lower than the upper limit'
    vMin = input('Select the lower limit again: ')
    vMax = input('Select the upper limit again: ')

if mode == 'kms':
    pylab.hist(fwhmatrix.ravel(), bins=40, range=[0*scalconst,50*scalconst]) #histogram of FWHM values
    pylab.title('FWHM values in kms')
    pylab.show()
else:
    pylab.hist(fwhmatrix.ravel(), bins=40, range=[0,50])
    pylab.title('FWHM values in pixels')
    pylab.show()
minfwhm = input('Select the lower FWHM limit: ')
maxfwhm = input('Select the higher FWHM limit: ')
while maxfwhm < minfwhm:
    print 'The lower FWHM limit must be lower than the upper limit'
    minfwhm = input('Select the lower FWHM limit again: ')
    maxfwhm = input('Select the upper FWHM limit again: ')

pylab.hist(simple.ravel(), bins=40) #histogram of brightness values
pylab.title('Brightness counts')
pylab.show()
minbright = input('Select the lower brightness cutoff: ')
maxbright = input('Select the upper brightness cutoff: ')
while minbright > maxbright:
    print 'The lower brightness value must be smaller than the upper one'
    minbright = input('Select the lower brightness cutoff again: ')
    maxbright = input('Select the upper brightness cutoff again: ')

print('%(objname)s offsets array saved as %(objname)s_redshifts.npy, FWHMs saved as %(objname)s_FWHMs.npy, brightness saved as %(objname)s_brightness.npy'%vars())
print 'Datacube dimensions:', axis1, 'x', axis2, 'x', axis3
print 'The time taken to run the script:',(datetime.now()-startTime)

########################################################## Plotting the offsets and FWHMs #########################################################
if axis2 == 76:
    ylabel = 'Scale: 1px = 0.5arcsec'
else:
    ylabel = 'Scale: 1px = 1arcsec'
xlabel = 'Scale: 1px = 1arcsec'

pylab.figure(0)
pylab.title('%(objname)s redshifts'%vars())
pylab.subplot(111).text(0,0,'Step size = %(dcubestep)s$\AA$'%vars(), color='black', fontsize=17)
pylab.pcolormesh(matrix,vmin=vMin,vmax=vMax) #plotting the offset values
pylab.xlabel('%(xlabel)s'%vars())
pylab.ylabel('%(ylabel)s'%vars())
pylab.colorbar()
pylab.savefig('%(objname)s_redshifts.png'%vars())

pylab.figure(1)
pylab.title('%(objname)s FWHM values'%vars())
pylab.subplot(111).text(0,0,'Step size = %(dcubestep)s$\AA$'%vars(), color='blue', fontsize=17)
pylab.pcolormesh(fwhmatrix,cmap=plt.cm.hot,vmin=minfwhm,vmax=maxfwhm) #plotting the FWHMs
pylab.xlabel('%(xlabel)s'%vars())
pylab.ylabel('%(ylabel)s'%vars())
pylab.colorbar()
pylab.savefig('%(objname)s_FWHMs.png'%vars())

pylab.figure(2)
pylab.title('%(objname)s brightness plot'%vars()) #plotting the object brightness
pylab.pcolormesh(simple,cmap=plt.cm.gray,vmin=minbright,vmax=maxbright)
pylab.colorbar()
pylab.savefig('%(objname)s_brightness.png'%vars())
pylab.show()
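
#Illustrative helper, not called anywhere above: a minimal sketch of how the saved .npy arrays
#could be reloaded and re-plotted later without rerunning the whole pipeline. The function name,
#its arguments and the use of scipy.load (the counterpart of the scipy.save calls above) are
#assumptions made only for this example.
def replot_saved_map(filename, vmin=None, vmax=None):
    data = scipy.load(filename) #reads an array written by scipy.save, e.g. 'NGC5170_redshifts.npy'
    pylab.figure()
    pylab.pcolormesh(data, vmin=vmin, vmax=vmax)
    pylab.colorbar()
    pylab.show()
    return data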