I am trying to plot the surface brightness of a test galaxy in magnitudes using GalSim. I put together a test based on the examples available in the GalSim repository. The flux values come from spextra, simulating an 'sc'-type galaxy with a V-band magnitude of 5.72:
object_sp = Spextrum.from_file("sc.fits").scale_to_magnitude(amplitude=5.72 * u.mag,filter_curve="johnson_V.dat")
object_flux = object_sp.get_flux(filter_curve="johnson_V.dat").to(u.W / (u.m**2 * u.nm))
I computed the counts (photons per second) using these functions:
import numpy as np
import astropy.units as u
from astropy import constants as const

def PhotonEnergy(band):
    # Energy of one photon at the band's central wavelength (band given in nm)
    wavelen = band * u.nm
    wavelen_m = wavelen.to(u.meter)
    E = (const.h * const.c) / wavelen_m
    E_convert = E.to(u.watt * u.s)  # i.e. joules
    return E_convert
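As a quick sanity check, hc/lambda at 525 nm should come out to about 3.78e-19 J:
# hc / (525 nm) ≈ 3.78e-19 J
print(PhotonEnergy(525))  # ~3.78e-19 W s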
def counts(flux, photonE, filterBandwidth, aperture, pixelSurface, QE):
    # detected photons/s: QE * (flux / photon energy) * bandwidth * collecting area * pixel area
    targetPhotons = QE * flux / photonE * filterBandwidth * aperture * pixelSurface
    return targetPhotons
def effectiveAperture(mirror1, mirror2, aperture, CO=0.0):
    # CO (central obscuration) is not used yet; giving it a default keeps the
    # 3-argument call below working
    Total_losses = mirror1 * mirror2  # reflectivity losses from the two mirrors
    return aperture**2 * (np.pi / 4) * Total_losses * u.m**2
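With the numbers used below (two 0.98 reflectivities and a 0.203 m aperture diameter) this evaluates to roughly 0.031 m^2:
# (0.203 m)^2 * pi/4 * 0.98 * 0.98 ≈ 0.0311 m^2
print(effectiveAperture(0.98, 0.98, 0.203))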
Parameters for computing the counts:
# the photon energy in V band
pe = PhotonEnergy(525)
# V band bandwidth
filterBandwidth = 80.7 * u.nm
# the quantum efficiency of the sensor
QE = 0.8
# the effective aperture (Area) of the telescope
aperture = effectiveAperture(0.98,0.98,0.203)
# the pixel surface is image scale^2, where image scale is arc seconds/pixel
pixelSurface = 0.92**2
object_counts = counts(object_flux, pe, filterBandwidth, aperture, pixelSurface, QE)
Result:
829987.96 counts/s
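Unit-wise this checks out: flux in W m^-2 nm^-1 divided by the photon energy in W s gives photons s^-1 m^-2 nm^-1; multiplying by the bandwidth in nm and the effective area in m^2 leaves photons/s, and the pixelSurface factor enters as a plain number since I did not attach arcsec^2 units to it.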
Now we make a model galaxy using GalSim:
import galsim
import matplotlib.pyplot as plt

def galaxy(bulge_n, bulge_re, disk_n, disk_r0, gal_flux, bulge_frac, gal_q, gal_beta):
    bulge = galsim.Sersic(n=bulge_n, half_light_radius=bulge_re)
    disk = galsim.Sersic(n=disk_n, scale_radius=disk_r0)
    # Combine bulge and disk, then set the total flux
    untruncated_gal = bulge_frac * bulge + (1 - bulge_frac) * disk
    untruncated_gal = untruncated_gal.withFlux(gal_flux)
    # Soften the outer profile by convolving with an exponential
    # (a convolution, not a hard truncation)
    truncation_radius = 20.0  # adjust as needed
    truncation = galsim.Exponential(half_light_radius=truncation_radius)
    gal = galsim.Convolve([untruncated_gal, truncation])
    # Set the shape of the galaxy according to axis ratio and position angle
    gal_shape = galsim.Shear(q=gal_q, beta=gal_beta * galsim.degrees)
    gal = gal.shear(gal_shape)
    return gal
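If an actual truncation is what I want here (rather than the convolution above), I believe galsim.Sersic can do it directly through its trunc argument; a sketch with an illustrative truncation radius:
# alternative: hard-truncate the Sersic profile itself; trunc is in the same
# units as the radius (arcsec here), and 20.0 is just an illustrative value
bulge = galsim.Sersic(n=bulge_n, half_light_radius=bulge_re, trunc=20.0)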
def plot_surface_brightness_profile(galaxy):
    # Radii (arcsec) at which to sample the profile, out to 20 arcsec
    radii = np.linspace(0, 20, 1000)
    # Profile value along the x-axis; with sizes in arcsec this is flux per arcsec^2
    surface_brightness = [galaxy.xValue(galsim.PositionD(r, 0)) for r in radii]
    # Plot the surface brightness profile as a function of radius
    plt.plot(radii, surface_brightness)
    plt.xlabel('Radius (arcseconds)')
    plt.ylabel('Surface Brightness')
    plt.title('Surface Brightness Profile of Galaxy')
    plt.show()
# Example parameters
bulge_n = 2
bulge_re = 2.5
disk_n = 1
disk_r0 = 25
gal_flux = object_counts.value
bulge_frac = 0.2
gal_q = 0.4
gal_beta = 30.0 # in degrees
# Create galaxy with the specified parameters
test_galaxy = galaxy(bulge_n, bulge_re, disk_n, disk_r0, gal_flux, bulge_frac, gal_q, gal_beta)
# Draw the galaxy and save it to a FITS image
image_size = 200
image = galsim.ImageF(image_size, image_size)
test_galaxy.drawImage(image)
image.write('galaxy_image.fits')
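One thing I am not sure about: since I do not pass a pixel scale, I believe drawImage picks one itself (the Nyquist scale of the profile). To sample at my detector's 0.92 arcsec/pixel I would presumably need something like:
# assumption: 0.92 arcsec/pixel, matching the image scale used for pixelSurface above
test_galaxy.drawImage(image, scale=0.92)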
# Plot the surface brightness profile of the galaxy
plot_surface_brightness_profile(test_galaxy)
The resulting plot has values in counts on the Y axis. I would expect, ideally, values in magnitudes. Is that possible? I also noticed that the galsim.Sersic class expects the flux in photons/cm^2/s. Where does the cm^2 come from? Why not photons/s?
In the end I would like a plot with magnitudes on the Y axis, not counts.
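For context, what I have in mind is converting the sampled profile to magnitudes per square arcsecond with a zero point, roughly along these lines (ZP is a placeholder value I would still need to work out for my system; surface_brightness and radii are the arrays from plot_surface_brightness_profile):
# rough idea only: convert counts/s/arcsec^2 to mag/arcsec^2 with a placeholder zero point
ZP = 25.0  # hypothetical zero point
mu = ZP - 2.5 * np.log10(np.array(surface_brightness))
plt.plot(radii, mu)
plt.gca().invert_yaxis()  # brighter (smaller magnitude) towards the top
plt.ylabel('Surface Brightness (mag / arcsec^2)')
plt.show()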
For comparison, the current (counts-based) code generates this plot: