
Colour is an open-source Python package providing a comprehensive collection of algorithms and datasets for colour science.
It is freely available under the New BSD License terms.
Colour is an affiliated project of NumFOCUS, a 501(c)(3) nonprofit in the United States.
1 Draft Release Notes#
The draft release notes from the develop branch are available at this url.
2 Sponsors#
We are grateful for the support of our sponsors. If you’d like to join them, please consider becoming a sponsor on OpenCollective.
3 Features#
Most of the objects are available from the colour namespace:
>>> import colour
3.1 Automatic Colour Conversion Graph - colour.graph#
Starting with version 0.3.14, Colour implements an automatic colour conversion graph enabling easier colour conversions.

>>> sd = colour.SDS_COLOURCHECKERS["ColorChecker N Ohta"]["dark skin"]
>>> colour.convert(
... sd, "Spectral Distribution", "sRGB", verbose={"mode": "Short"}
... )
===============================================================================
* *
* [ Conversion Path ] *
* *
* "sd_to_XYZ" --> "XYZ_to_sRGB" *
* *
===============================================================================
array([ 0.45675795, 0.30986982, 0.24861924])
>>> illuminant = colour.SDS_ILLUMINANTS["FL2"]
>>> colour.convert(
... sd,
... "Spectral Distribution",
... "sRGB",
... sd_to_XYZ={"illuminant": illuminant},
... )
array([ 0.47924575, 0.31676968, 0.17362725])
3.2 Chromatic Adaptation - colour.adaptation#
>>> XYZ = [0.20654008, 0.12197225, 0.05136952]
>>> D65 = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"][
... "D65"
... ]
>>> A = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"]["A"]
>>> colour.chromatic_adaptation(
... XYZ, colour.xy_to_XYZ(D65), colour.xy_to_XYZ(A)
... )
array([ 0.2533053 , 0.13765138, 0.01543307])
>>> sorted(colour.CHROMATIC_ADAPTATION_METHODS)
['CIE 1994', 'CMCCAT2000', 'Fairchild 1990', 'Von Kries', 'Zhai 2018']
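The default Von Kries based method can also be driven with a specific chromatic adaptation transform; a minimal sketch assuming its transform keyword (result assigned, output omitted):
>>> XYZ_a = colour.chromatic_adaptation(
...     XYZ, colour.xy_to_XYZ(D65), colour.xy_to_XYZ(A), transform="Bradford"
... )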
3.3 Algebra - colour.algebra#
3.3.1 Kernel Interpolation#
>>> y = [5.9200, 9.3700, 10.8135, 4.5100, 69.5900, 27.8007, 86.0500]
>>> x = range(len(y))
>>> colour.KernelInterpolator(x, y)([0.25, 0.75, 5.50])
array([ 6.18062083, 8.08238488, 57.85783403])
3.3.2 Sprague (1880) Interpolation#
>>> y = [5.9200, 9.3700, 10.8135, 4.5100, 69.5900, 27.8007, 86.0500]
>>> x = range(len(y))
>>> colour.SpragueInterpolator(x, y)([0.25, 0.75, 5.50])
array([ 6.72951612, 7.81406251, 43.77379185])
3.4 Colour Appearance Models - colour.appearance#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> XYZ_w = [95.05, 100.00, 108.88]
>>> L_A = 318.31
>>> Y_b = 20.0
>>> colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
CAM_Specification_CIECAM02(J=34.434525727858997, C=67.365010921125943, h=22.279164147957065, s=62.81485585332716, Q=177.47124941102123, M=70.024939419291414, H=2.6896085344238898, HC=None)
>>> colour.XYZ_to_CIECAM16(XYZ, XYZ_w, L_A, Y_b)
CAM_Specification_CIECAM16(J=34.434525727858997, C=67.365010921125943, h=22.279164147957065, s=62.81485585332716, Q=177.47124941102123, M=70.024939419291414, H=2.6896085344238898, HC=None)
>>> colour.XYZ_to_CAM16(XYZ, XYZ_w, L_A, Y_b)
CAM_Specification_CAM16(J=33.880368498111686, C=69.444353357408033, h=19.510887327451748, s=64.03612114840314, Q=176.03752758512178, M=72.18638534116765, H=399.52975599115319, HC=None)
>>> colour.XYZ_to_Hellwig2022(XYZ, XYZ_w, L_A)
CAM_Specification_Hellwig2022(J=33.880368498111686, C=40.347043294550311, h=19.510887327451748, s=117.38555017188679, Q=45.34489577734751, M=53.228355383108031, H=399.52975599115319, HC=None)
>>> colour.XYZ_to_Kim2009(XYZ, XYZ_w, L_A)
CAM_Specification_Kim2009(J=19.879918542450902, C=55.839055250876946, h=22.013388165090046, s=112.97979354939129, Q=36.309026130161449, M=46.346415858227864, H=2.3543198369639931, HC=None)
>>> colour.XYZ_to_ZCAM(XYZ, XYZ_w, L_A, Y_b)
CAM_Specification_ZCAM(J=38.347186278956357, C=21.12138989208518, h=33.711578931095197, s=81.444585609489536, Q=76.986725284523772, M=42.403805833900506, H=0.45779200212219573, HC=None, V=43.623590687423544, K=43.20894953152817, W=34.829588380192149)
3.5 Colour Blindness - colour.blindness#
>>> import numpy as np
>>> cmfs = colour.LMS_CMFS["Stockman & Sharpe 2 Degree Cone Fundamentals"]
>>> colour.msds_cmfs_anomalous_trichromacy_Machado2009(
... cmfs, np.array([15, 0, 0])
... )[450]
array([ 0.08912884, 0.0870524 , 0.955393 ])
>>> primaries = colour.MSDS_DISPLAY_PRIMARIES["Apple Studio Display"]
>>> d_LMS = (15, 0, 0)
>>> colour.matrix_anomalous_trichromacy_Machado2009(cmfs, primaries, d_LMS)
array([[-0.27774652, 2.65150084, -1.37375432],
[ 0.27189369, 0.20047862, 0.52762768],
[ 0.00644047, 0.25921579, 0.73434374]])
3.6 Colour Correction - colour.characterisation#
>>> import numpy as np
>>> RGB = [0.17224810, 0.09170660, 0.06416938]
>>> M_T = np.random.random((24, 3))
>>> M_R = M_T + (np.random.random((24, 3)) - 0.5) * 0.5
>>> colour.colour_correction(RGB, M_T, M_R)
array([ 0.1806237 , 0.07234791, 0.07848845])
>>> sorted(colour.COLOUR_CORRECTION_METHODS)
['Cheung 2004', 'Finlayson 2015', 'Vandermonde']
3.7 ACES Input Transform - colour.characterisation#
>>> sensitivities = colour.MSDS_CAMERA_SENSITIVITIES["Nikon 5100 (NPL)"]
>>> illuminant = colour.SDS_ILLUMINANTS["D55"]
>>> colour.matrix_idt(sensitivities, illuminant)
(array([[ 0.46579986, 0.13409221, 0.01935163],
[ 0.01786092, 0.77557268, -0.16775531],
[ 0.03458647, -0.16152923, 0.74270363]]), array([ 1.58214188, 1. , 1.28910346]))
3.8 Colorimetry - colour.colorimetry#
3.8.1 Spectral Computations#
>>> colour.sd_to_XYZ(colour.SDS_LIGHT_SOURCES["Neodimium Incandescent"])
array([ 36.94726204, 32.62076174, 13.0143849 ])
>>> sorted(colour.SPECTRAL_TO_XYZ_METHODS)
['ASTM E308', 'Integration', 'astm2015']
3.8.2 Multi-Spectral Computations#
>>> msds = np.array(
... [
... [
... [
... 0.01367208,
... 0.09127947,
... 0.01524376,
... 0.02810712,
... 0.19176012,
... 0.04299992,
... ],
... [
... 0.00959792,
... 0.25822842,
... 0.41388571,
... 0.22275120,
... 0.00407416,
... 0.37439537,
... ],
... [
... 0.01791409,
... 0.29707789,
... 0.56295109,
... 0.23752193,
... 0.00236515,
... 0.58190280,
... ],
... ],
... [
... [
... 0.01492332,
... 0.10421912,
... 0.02240025,
... 0.03735409,
... 0.57663846,
... 0.32416266,
... ],
... [
... 0.04180972,
... 0.26402685,
... 0.03572137,
... 0.00413520,
... 0.41808194,
... 0.24696727,
... ],
... [
... 0.00628672,
... 0.11454948,
... 0.02198825,
... 0.39906919,
... 0.63640803,
... 0.01139849,
... ],
... ],
... [
... [
... 0.04325933,
... 0.26825359,
... 0.23732357,
... 0.05175860,
... 0.01181048,
... 0.08233768,
... ],
... [
... 0.02484169,
... 0.12027161,
... 0.00541695,
... 0.00654612,
... 0.18603799,
... 0.36247808,
... ],
... [
... 0.03102159,
... 0.16815442,
... 0.37186235,
... 0.08610666,
... 0.00413520,
... 0.78492409,
... ],
... ],
... [
... [
... 0.11682307,
... 0.78883040,
... 0.74468607,
... 0.83375293,
... 0.90571451,
... 0.70054168,
... ],
... [
... 0.06321812,
... 0.41898224,
... 0.15190357,
... 0.24591440,
... 0.55301750,
... 0.00657664,
... ],
... [
... 0.00305180,
... 0.11288624,
... 0.11357290,
... 0.12924391,
... 0.00195315,
... 0.21771573,
... ],
... ],
... ]
... )
>>> colour.msds_to_XYZ(
... msds,
... method="Integration",
... shape=colour.SpectralShape(400, 700, 60),
... )
array([[[ 7.68544647, 4.09414317, 8.49324254],
[ 17.12567298, 27.77681821, 25.52573685],
[ 19.10280411, 34.45851476, 29.76319628]],
[[ 18.03375827, 8.62340812, 9.71702574],
[ 15.03110867, 6.54001068, 24.53208465],
[ 37.68269495, 26.4411103 , 10.66361816]],
[[ 8.09532373, 12.75333339, 25.79613956],
[ 7.09620297, 2.79257389, 11.15039854],
[ 8.933163 , 19.39985815, 17.14915636]],
[[ 80.00969553, 80.39810464, 76.08184429],
[ 33.27611427, 24.38947838, 39.34919287],
[ 8.89425686, 11.05185138, 10.86767594]]])
>>> sorted(colour.MSDS_TO_XYZ_METHODS)
['ASTM E308', 'Integration', 'astm2015']
3.8.3 Blackbody Spectral Radiance Computation#
>>> colour.sd_blackbody(5000)
SpectralDistribution([[ 3.60000000e+02, 6.65427827e+12],
[ 3.61000000e+02, 6.70960528e+12],
[ 3.62000000e+02, 6.76482512e+12],
...
[ 7.78000000e+02, 1.06068004e+13],
[ 7.79000000e+02, 1.05903327e+13],
[ 7.80000000e+02, 1.05738520e+13]],
interpolator=SpragueInterpolator,
interpolator_args={},
extrapolator=Extrapolator,
extrapolator_args={'right': None, 'method': 'Constant', 'left': None})
3.8.4 Dominant, Complementary Wavelength & Colour Purity Computation#
>>> xy = [0.54369557, 0.32107944]
>>> xy_n = [0.31270000, 0.32900000]
>>> colour.dominant_wavelength(xy, xy_n)
(array(616.0),
array([ 0.68354746, 0.31628409]),
array([ 0.68354746, 0.31628409]))
3.8.5 Lightness Computation#
>>> colour.lightness(12.19722535)
41.527875844653451
>>> sorted(colour.LIGHTNESS_METHODS)
['Abebe 2017',
'CIE 1976',
'Fairchild 2010',
'Fairchild 2011',
'Glasser 1958',
'Lstar1976',
'Wyszecki 1963']
3.8.6 Luminance Computation#
>>> colour.luminance(41.52787585)
12.197225353400775
>>> sorted(colour.LUMINANCE_METHODS)
['ASTM D1535',
'CIE 1976',
'Fairchild 2010',
'Fairchild 2011',
'Newhall 1943',
'astm2008',
'cie1976']
3.8.7 Whiteness Computation#
>>> XYZ = [95.00000000, 100.00000000, 105.00000000]
>>> XYZ_0 = [94.80966767, 100.00000000, 107.30513595]
>>> colour.whiteness(XYZ, XYZ_0)
array([ 93.756 , -1.33000001])
>>> sorted(colour.WHITENESS_METHODS)
['ASTM E313',
'Berger 1959',
'CIE 2004',
'Ganz 1979',
'Stensby 1968',
'Taube 1960',
'cie2004']
3.8.8 Yellowness Computation#
>>> XYZ = [95.00000000, 100.00000000, 105.00000000]
>>> colour.yellowness(XYZ)
4.3400000000000034
>>> sorted(colour.YELLOWNESS_METHODS)
['ASTM D1925', 'ASTM E313', 'ASTM E313 Alternative']
3.8.9 Luminous Flux, Efficiency & Efficacy Computation#
>>> sd = colour.SDS_LIGHT_SOURCES["Neodimium Incandescent"]
>>> colour.luminous_flux(sd)
23807.655527367202
>>> sd = colour.SDS_LIGHT_SOURCES["Neodimium Incandescent"]
>>> colour.luminous_efficiency(sd)
0.19943935624521045
>>> sd = colour.SDS_LIGHT_SOURCES["Neodimium Incandescent"]
>>> colour.luminous_efficacy(sd)
136.21708031547874
3.9 Contrast Sensitivity Function - colour.contrast#
>>> colour.contrast_sensitivity_function(u=4, X_0=60, E=65)
358.51180789884984
>>> sorted(colour.CONTRAST_SENSITIVITY_METHODS)
['Barten 1999']
3.10 Colour Difference - colour.difference#
>>> Lab_1 = [100.00000000, 21.57210357, 272.22819350]
>>> Lab_2 = [100.00000000, 426.67945353, 72.39590835]
>>> colour.delta_E(Lab_1, Lab_2)
94.035649026659485
>>> sorted(colour.DELTA_E_METHODS)
['CAM02-LCD',
'CAM02-SCD',
'CAM02-UCS',
'CAM16-LCD',
'CAM16-SCD',
'CAM16-UCS',
'CIE 1976',
'CIE 1994',
'CIE 2000',
'CMC',
'DIN99',
'ITP',
'cie1976',
'cie1994',
'cie2000']
3.11 IO - colour.io#
3.11.1 Images#
>>> RGB = colour.read_image("Ishihara_Colour_Blindness_Test_Plate_3.png")
>>> RGB.shape
(276, 281, 3)
3.11.2 Look Up Table (LUT) Data#
>>> LUT = colour.read_LUT("ACES_Proxy_10_to_ACES.cube")
>>> print(LUT)
LUT3x1D - ACES Proxy 10 to ACES
-------------------------------
Dimensions : 2
Domain : [[0 0 0]
[1 1 1]]
Size : (32, 3)
>>> RGB = [0.17224810, 0.09170660, 0.06416938]
>>> LUT.apply(RGB)
array([ 0.00575674, 0.00181493, 0.00121419])
3.12 Colour Models - colour.models#
3.12.1 CIE xyY Colourspace#
>>> colour.XYZ_to_xyY([0.20654008, 0.12197225, 0.05136952])
array([ 0.54369557, 0.32107944, 0.12197225])
3.12.2 CIE L*a*b* Colourspace#
>>> colour.XYZ_to_Lab([0.20654008, 0.12197225, 0.05136952])
array([ 41.52787529, 52.63858304, 26.92317922])
3.12.3 CIE L*u*v* Colourspace#
>>> colour.XYZ_to_Luv([0.20654008, 0.12197225, 0.05136952])
array([ 41.52787529, 96.83626054, 17.75210149])
3.12.4 CIE 1960 UCS Colourspace#
>>> colour.XYZ_to_UCS([0.20654008, 0.12197225, 0.05136952])
array([ 0.13769339, 0.12197225, 0.1053731 ])
3.12.5 CIE 1964 U*V*W* Colourspace#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> colour.XYZ_to_UVW(XYZ)
array([ 94.55035725, 11.55536523, 40.54757405])
3.12.6 Hunter L,a,b Colour Scale#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> colour.XYZ_to_Hunter_Lab(XYZ)
array([ 34.92452577, 47.06189858, 14.38615107])
3.12.7 Hunter Rd,a,b Colour Scale#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> colour.XYZ_to_Hunter_Rdab(XYZ)
array([ 12.197225 , 57.12537874, 17.46241341])
3.12.8 CAM02-LCD, CAM02-SCD, and CAM02-UCS Colourspaces - Luo, Cui and Li (2006)#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> XYZ_w = [95.05, 100.00, 108.88]
>>> L_A = 318.31
>>> Y_b = 20.0
>>> surround = colour.VIEWING_CONDITIONS_CIECAM02["Average"]
>>> specification = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b, surround)
>>> JMh = (specification.J, specification.M, specification.h)
>>> colour.JMh_CIECAM02_to_CAM02UCS(JMh)
array([ 47.16899898, 38.72623785, 15.8663383 ])
>>> XYZ = [0.20654008, 0.12197225, 0.05136952]
>>> XYZ_w = [95.05 / 100, 100.00 / 100, 108.88 / 100]
>>> colour.XYZ_to_CAM02UCS(XYZ, XYZ_w=XYZ_w, L_A=L_A, Y_b=Y_b)
array([ 47.16899898, 38.72623785, 15.8663383 ])
3.12.9 CAM16-LCD, CAM16-SCD, and CAM16-UCS Colourspaces - Li et al. (2017)#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> XYZ_w = [95.05, 100.00, 108.88]
>>> L_A = 318.31
>>> Y_b = 20.0
>>> surround = colour.VIEWING_CONDITIONS_CAM16["Average"]
>>> specification = colour.XYZ_to_CAM16(XYZ, XYZ_w, L_A, Y_b, surround)
>>> JMh = (specification.J, specification.M, specification.h)
>>> colour.JMh_CAM16_to_CAM16UCS(JMh)
array([ 46.55542238, 40.22460974, 14.25288392])
>>> XYZ = [0.20654008, 0.12197225, 0.05136952]
>>> XYZ_w = [95.05 / 100, 100.00 / 100, 108.88 / 100]
>>> colour.XYZ_to_CAM16UCS(XYZ, XYZ_w=XYZ_w, L_A=L_A, Y_b=Y_b)
array([ 46.55542238, 40.22460974, 14.25288392])
3.12.10 ICaCb Colourspace#
>>> colour.models.XYZ_to_ICaCb(np.array([0.20654008, 0.12197225, 0.05136952]))
array([ 0.06875297, 0.05753352, 0.02081548])
3.12.11 IgPgTg Colourspace#
>>> colour.XYZ_to_IgPgTg([0.20654008, 0.12197225, 0.05136952])
array([ 0.42421258, 0.18632491, 0.10689223])
3.12.12 IPT Colourspace#
>>> colour.XYZ_to_IPT([0.20654008, 0.12197225, 0.05136952])
array([ 0.38426191, 0.38487306, 0.18886838])
3.12.13 Munish Ragoo and Farup (2021) Optimised IPT Colourspace#
>>> colour.XYZ_to_IPT_Munish2021([0.20654008, 0.12197225, 0.05136952])
array([ 0.42248243, 0.2910514 , 0.20410663])
3.12.14 DIN99 Colourspace#
>>> Lab = [41.52787529, 52.63858304, 26.92317922]
>>> colour.Lab_to_DIN99(Lab)
array([ 53.22821988, 28.41634656, 3.89839552])
3.12.15 hdr-CIELAB Colourspace#
>>> colour.XYZ_to_hdr_CIELab([0.20654008, 0.12197225, 0.05136952])
array([ 51.87002062, 60.4763385 , 32.14551912])
3.12.16 hdr-IPT Colourspace#
>>> colour.XYZ_to_hdr_IPT([0.20654008, 0.12197225, 0.05136952])
array([ 25.18261761, -22.62111297, 3.18511729])
3.12.17 Oklab Colourspace#
>>> colour.XYZ_to_Oklab([0.20654008, 0.12197225, 0.05136952])
array([ 0.51634019, 0.154695 , 0.06289579])
3.12.18 OSA UCS Colourspace#
>>> XYZ = [0.20654008 * 100, 0.12197225 * 100, 0.05136952 * 100]
>>> colour.XYZ_to_OSA_UCS(XYZ)
array([-3.0049979 , 2.99713697, -9.66784231])
3.12.19 ProLab Colourspace#
>>> colour.XYZ_to_ProLab([0.51634019, 0.15469500, 0.06289579])
array([1.24610688, 2.39525236, 0.41902126])
3.12.20 Jzazbz Colourspace#
>>> colour.XYZ_to_Jzazbz([0.20654008, 0.12197225, 0.05136952])
array([ 0.00535048, 0.00924302, 0.00526007])
3.12.21 Y’CbCr Colour Encoding#
>>> colour.RGB_to_YCbCr([1.0, 1.0, 1.0])
array([ 0.92156863, 0.50196078, 0.50196078])
3.12.22 YCoCg Colour Encoding#
>>> colour.RGB_to_YCoCg([0.75, 0.75, 0.0])
array([ 0.5625, 0.375 , 0.1875])
3.12.23 ICtCp Colour Encoding#
>>> colour.RGB_to_ICtCp([0.45620519, 0.03081071, 0.04091952])
array([ 0.07351364, 0.00475253, 0.09351596])
3.12.24 HSV Colourspace#
>>> colour.RGB_to_HSV([0.45620519, 0.03081071, 0.04091952])
array([ 0.99603944, 0.93246304, 0.45620519])
3.12.25 IHLS Colourspace#
>>> colour.RGB_to_IHLS([0.45620519, 0.03081071, 0.04091952])
array([ 6.26236117, 0.12197943, 0.42539448])
3.12.26 Prismatic Colourspace#
>>> colour.RGB_to_Prismatic([0.25, 0.50, 0.75])
array([ 0.75 , 0.16666667, 0.33333333, 0.5 ])
3.12.27 RGB Colourspace and Transformations#
>>> XYZ = [0.21638819, 0.12570000, 0.03847493]
>>> illuminant_XYZ = [0.34570, 0.35850]
>>> illuminant_RGB = [0.31270, 0.32900]
>>> chromatic_adaptation_transform = "Bradford"
>>> matrix_XYZ_to_RGB = [
... [3.24062548, -1.53720797, -0.49862860],
... [-0.96893071, 1.87575606, 0.04151752],
... [0.05571012, -0.20402105, 1.05699594],
... ]
>>> colour.XYZ_to_RGB(
... XYZ,
... illuminant_XYZ,
... illuminant_RGB,
... matrix_XYZ_to_RGB,
... chromatic_adaptation_transform,
... )
array([ 0.45595571, 0.03039702, 0.04087245])
3.12.28 RGB Colourspace Derivation#
>>> p = [0.73470, 0.26530, 0.00000, 1.00000, 0.00010, -0.07700]
>>> w = [0.32168, 0.33767]
>>> colour.normalised_primary_matrix(p, w)
array([[ 9.52552396e-01, 0.00000000e+00, 9.36786317e-05],
[ 3.43966450e-01, 7.28166097e-01, -7.21325464e-02],
[ 0.00000000e+00, 0.00000000e+00, 1.00882518e+00]])
3.12.29 RGB Colourspaces#
>>> sorted(colour.RGB_COLOURSPACES)
['ACES2065-1',
'ACEScc',
'ACEScct',
'ACEScg',
'ACESproxy',
'ARRI Wide Gamut 3',
'ARRI Wide Gamut 4',
'Adobe RGB (1998)',
'Adobe Wide Gamut RGB',
'Apple RGB',
'Best RGB',
'Beta RGB',
'Blackmagic Wide Gamut',
'CIE RGB',
'Cinema Gamut',
'ColorMatch RGB',
'DCDM XYZ',
'DCI-P3',
'DCI-P3-P',
'DJI D-Gamut',
'DRAGONcolor',
'DRAGONcolor2',
'DaVinci Wide Gamut',
'Display P3',
'Don RGB 4',
'EBU Tech. 3213-E',
'ECI RGB v2',
'ERIMM RGB',
'Ekta Space PS 5',
'F-Gamut',
'FilmLight E-Gamut',
'ITU-R BT.2020',
'ITU-R BT.470 - 525',
'ITU-R BT.470 - 625',
'ITU-R BT.709',
'ITU-T H.273 - 22 Unspecified',
'ITU-T H.273 - Generic Film',
'Max RGB',
'N-Gamut',
'NTSC (1953)',
'NTSC (1987)',
'P3-D65',
'Pal/Secam',
'ProPhoto RGB',
'Protune Native',
'REDWideGamutRGB',
'REDcolor',
'REDcolor2',
'REDcolor3',
'REDcolor4',
'RIMM RGB',
'ROMM RGB',
'Russell RGB',
'S-Gamut',
'S-Gamut3',
'S-Gamut3.Cine',
'SMPTE 240M',
'SMPTE C',
'Sharp RGB',
'V-Gamut',
'Venice S-Gamut3',
'Venice S-Gamut3.Cine',
'Xtreme RGB',
'aces',
'adobe1998',
'prophoto',
'sRGB']
3.12.30 OETFs#
>>> sorted(colour.OETFS)
['ARIB STD-B67',
'Blackmagic Film Generation 5',
'DaVinci Intermediate',
'ITU-R BT.2020',
'ITU-R BT.2100 HLG',
'ITU-R BT.2100 PQ',
'ITU-R BT.601',
'ITU-R BT.709',
'ITU-T H.273 IEC 61966-2',
'ITU-T H.273 Log',
'ITU-T H.273 Log Sqrt',
'SMPTE 240M']
3.12.31 EOTFs#
>>> sorted(colour.EOTFS)
['DCDM',
'DICOM GSDF',
'ITU-R BT.1886',
'ITU-R BT.2100 HLG',
'ITU-R BT.2100 PQ',
'ITU-T H.273 ST.428-1',
'SMPTE 240M',
'ST 2084',
'sRGB']
3.12.32 OOTFs#
>>> sorted(colour.OOTFS)
['ITU-R BT.2100 HLG', 'ITU-R BT.2100 PQ']
3.12.33 Log Encoding / Decoding#
>>> sorted(colour.LOG_ENCODINGS)
['ACEScc',
'ACEScct',
'ACESproxy',
'ARRI LogC3',
'ARRI LogC4',
'Canon Log',
'Canon Log 2',
'Canon Log 3',
'Cineon',
'D-Log',
'ERIMM RGB',
'F-Log',
'Filmic Pro 6',
'L-Log',
'Log2',
'Log3G10',
'Log3G12',
'N-Log',
'PLog',
'Panalog',
'Protune',
'REDLog',
'REDLogFilm',
'S-Log',
'S-Log2',
'S-Log3',
'T-Log',
'V-Log',
'ViperLog']
3.12.34 CCTFs Encoding / Decoding#
>>> sorted(colour.CCTF_ENCODINGS)
['ACEScc',
'ACEScct',
'ACESproxy',
'ARRI LogC3',
'ARRI LogC4',
'ARIB STD-B67',
'Canon Log',
'Canon Log 2',
'Canon Log 3',
'Cineon',
'D-Log',
'DCDM',
'DICOM GSDF',
'ERIMM RGB',
'F-Log',
'Filmic Pro 6',
'Gamma 2.2',
'Gamma 2.4',
'Gamma 2.6',
'ITU-R BT.1886',
'ITU-R BT.2020',
'ITU-R BT.2100 HLG',
'ITU-R BT.2100 PQ',
'ITU-R BT.601',
'ITU-R BT.709',
'Log2',
'Log3G10',
'Log3G12',
'PLog',
'Panalog',
'ProPhoto RGB',
'Protune',
'REDLog',
'REDLogFilm',
'RIMM RGB',
'ROMM RGB',
'S-Log',
'S-Log2',
'S-Log3',
'SMPTE 240M',
'ST 2084',
'T-Log',
'V-Log',
'ViperLog',
'sRGB']
3.12.35 Recommendation ITU-T H.273 Code points for Video Signal Type Identification#
>>> colour.COLOUR_PRIMARIES_ITUTH273.keys()
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 22, 23])
>>> description = colour.models.describe_video_signal_colour_primaries(1)
===============================================================================
* *
* Colour Primaries: 1 *
* ------------------- *
* *
* Primaries : [[ 0.64 0.33] *
* [ 0.3 0.6 ] *
* [ 0.15 0.06]] *
* Whitepoint : [ 0.3127 0.329 ] *
* Whitepoint Name : D65 *
* NPM : [[ 0.4123908 0.35758434 0.18048079] *
* [ 0.21263901 0.71516868 0.07219232] *
* [ 0.01933082 0.11919478 0.95053215]] *
* NPM -1 : [[ 3.24096994 -1.53738318 -0.49861076] *
* [-0.96924364 1.8759675 0.04155506] *
* [ 0.05563008 -0.20397696 1.05697151]] *
* FFmpeg Constants : ['AVCOL_PRI_BT709', 'BT709'] *
* *
===============================================================================
>>> colour.TRANSFER_CHARACTERISTICS_ITUTH273.keys()
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>> description = (
... colour.models.describe_video_signal_transfer_characteristics(1)
... )
===============================================================================
* *
* Transfer Characteristics: 1 *
* --------------------------- *
* *
* Function : <function oetf_BT709 at 0x165bb3550> *
* FFmpeg Constants : ['AVCOL_TRC_BT709', 'BT709'] *
* *
===============================================================================
>>> colour.MATRIX_COEFFICIENTS_ITUTH273.keys()
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
>>> description = colour.models.describe_video_signal_matrix_coefficients(
... 1
... )
===============================================================================
* *
* Matrix Coefficients: 1 *
* ---------------------- *
* *
* Matrix Coefficients : [ 0.2126 0.0722] *
* FFmpeg Constants : ['AVCOL_SPC_BT709', 'BT709'] *
* *
===============================================================================
3.13 Colour Notation Systems - colour.notation#
3.13.1 Munsell Value#
>>> colour.munsell_value(12.23634268)
4.0824437076525664
>>> sorted(colour.MUNSELL_VALUE_METHODS)
['ASTM D1535',
'Ladd 1955',
'McCamy 1987',
'Moon 1943',
'Munsell 1933',
'Priest 1920',
'Saunderson 1944',
'astm2008']
3.13.2 Munsell Colour#
>>> colour.xyY_to_munsell_colour([0.38736945, 0.35751656, 0.59362000])
'4.2YR 8.1/5.3'
>>> colour.munsell_colour_to_xyY("4.2YR 8.1/5.3")
array([ 0.38736945, 0.35751656, 0.59362 ])
3.14 Optical Phenomena - colour.phenomena#
>>> colour.rayleigh_scattering_sd()
SpectralDistribution([[ 3.60000000e+02, 5.99101337e-01],
[ 3.61000000e+02, 5.92170690e-01],
[ 3.62000000e+02, 5.85341006e-01],
...
[ 7.78000000e+02, 2.55208377e-02],
[ 7.79000000e+02, 2.53887969e-02],
[ 7.80000000e+02, 2.52576106e-02]],
interpolator=SpragueInterpolator,
interpolator_args={},
extrapolator=Extrapolator,
extrapolator_args={'right': None, 'method': 'Constant', 'left': None})
3.15 Light Quality - colour.quality#
3.15.1 Colour Fidelity Index#
>>> colour.colour_fidelity_index(colour.SDS_ILLUMINANTS["FL2"])
70.120825477833037
>>> sorted(colour.COLOUR_FIDELITY_INDEX_METHODS)
['ANSI/IES TM-30-18', 'CIE 2017']
3.15.2 Colour Quality Scale#
>>> colour.colour_quality_scale(colour.SDS_ILLUMINANTS["FL2"])
64.111703163816699
>>> sorted(colour.COLOUR_QUALITY_SCALE_METHODS)
['NIST CQS 7.4', 'NIST CQS 9.0']
3.15.3 Colour Rendering Index#
>>> colour.colour_rendering_index(colour.SDS_ILLUMINANTS["FL2"])
64.233724121664807
3.15.4 Academy Spectral Similarity Index (SSI)#
>>> colour.spectral_similarity_index(
... colour.SDS_ILLUMINANTS["C"], colour.SDS_ILLUMINANTS["D65"]
... )
94.0
3.16 Spectral Up-Sampling & Recovery - colour.recovery#
3.16.1 Reflectance Recovery#
>>> colour.XYZ_to_sd([0.20654008, 0.12197225, 0.05136952])
SpectralDistribution([[ 3.60000000e+02, 8.40144095e-02],
[ 3.65000000e+02, 8.41264236e-02],
[ 3.70000000e+02, 8.40057597e-02],
...
[ 7.70000000e+02, 4.46743012e-01],
[ 7.75000000e+02, 4.46817187e-01],
[ 7.80000000e+02, 4.46857696e-01]],
SpragueInterpolator,
{},
Extrapolator,
{'method': 'Constant', 'left': None, 'right': None})
>>> sorted(colour.REFLECTANCE_RECOVERY_METHODS)
['Jakob 2019', 'Mallett 2019', 'Meng 2015', 'Otsu 2018', 'Smits 1999']
3.16.2 Camera RGB Sensitivities Recovery#
>>> illuminant = colour.colorimetry.SDS_ILLUMINANTS["D65"]
>>> sensitivities = colour.characterisation.MSDS_CAMERA_SENSITIVITIES[
... "Nikon 5100 (NPL)"
... ]
>>> reflectances = [
... sd.copy().align(
... colour.recovery.SPECTRAL_SHAPE_BASIS_FUNCTIONS_DYER2017
... )
... for sd in colour.characterisation.SDS_COLOURCHECKERS[
... "BabelColor Average"
... ].values()
... ]
>>> reflectances = colour.colorimetry.sds_and_msds_to_msds(reflectances)
>>> RGB = colour.colorimetry.msds_to_XYZ(
... reflectances,
... method="Integration",
... cmfs=sensitivities,
... illuminant=illuminant,
... k=0.01,
... shape=colour.recovery.SPECTRAL_SHAPE_BASIS_FUNCTIONS_DYER2017,
... )
>>> colour.recovery.RGB_to_msds_camera_sensitivities_Jiang2013(
... RGB,
... illuminant,
... reflectances,
... colour.recovery.BASIS_FUNCTIONS_DYER2017,
... colour.recovery.SPECTRAL_SHAPE_BASIS_FUNCTIONS_DYER2017,
... )
RGB_CameraSensitivities([[ 4.00000000e+02, 7.22815777e-03, 9.22506480e-03,
-9.88368972e-03],
[ 4.10000000e+02, -8.50457609e-03, 1.12777480e-02,
3.86248655e-03],
[ 4.20000000e+02, 4.58191132e-02, 7.15520948e-02,
4.04068293e-01],
...
[ 6.80000000e+02, 4.08276173e-02, 5.55290476e-03,
1.39907862e-03],
[ 6.90000000e+02, -3.71437574e-03, 2.50935640e-03,
3.97652622e-04],
[ 7.00000000e+02, -5.62256563e-03, 1.56433970e-03,
5.84726936e-04]],
['red', 'green', 'blue'],
SpragueInterpolator,
{},
Extrapolator,
{'method': 'Constant', 'left': None, 'right': None})
3.18 Colour Volume - colour.volume#
>>> colour.RGB_colourspace_volume_MonteCarlo(
...     colour.RGB_COLOURSPACES["sRGB"]
... )
821958.30000000005
3.19 Geometry Primitives Generation - colour.geometry#
>>> colour.primitive("Grid")
(array([ ([-0.5, 0.5, 0. ], [ 0., 1.], [ 0., 0., 1.], [ 0., 1., 0., 1.]),
([ 0.5, 0.5, 0. ], [ 1., 1.], [ 0., 0., 1.], [ 1., 1., 0., 1.]),
([-0.5, -0.5, 0. ], [ 0., 0.], [ 0., 0., 1.], [ 0., 0., 0., 1.]),
([ 0.5, -0.5, 0. ], [ 1., 0.], [ 0., 0., 1.], [ 1., 0., 0., 1.])],
dtype=[('position', '<f4', (3,)), ('uv', '<f4', (2,)), ('normal', '<f4', (3,)), ('colour', '<f4', (4,))]), array([[0, 2, 1],
[2, 3, 1]], dtype=uint32), array([[0, 2],
[2, 3],
[3, 1],
[1, 0]], dtype=uint32))
>>> sorted(colour.PRIMITIVE_METHODS)
['Cube', 'Grid']
>>> colour.primitive_vertices("Quad MPL")
array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 1., 1., 0.],
[ 0., 1., 0.]])
>>> sorted(colour.PRIMITIVE_VERTICES_METHODS)
['Cube MPL', 'Grid MPL', 'Quad MPL', 'Sphere']
3.20 Plotting - colour.plotting#
Most of the objects are available from the colour.plotting namespace:
>>> from colour.plotting import *
>>> colour_style()
3.20.1 Visible Spectrum#
>>> plot_visible_spectrum("CIE 1931 2 Degree Standard Observer")

3.20.2 Spectral Distribution#
>>> plot_single_illuminant_sd("FL1")

3.20.3 Blackbody#
>>> blackbody_sds = [
... colour.sd_blackbody(i, colour.SpectralShape(0, 10000, 10))
... for i in range(1000, 15000, 1000)
... ]
>>> plot_multi_sds(
... blackbody_sds,
... y_label="W / (sr m$^2$) / m",
... plot_kwargs={"use_sd_colours": True, "normalise_sd_colours": True},
... legend_location="upper right",
... bounding_box=(0, 1250, 0, 2.5e6),
... )

3.20.4 Colour Matching Functions#
>>> plot_single_cmfs(
... "Stockman & Sharpe 2 Degree Cone Fundamentals",
... y_label="Sensitivity",
... bounding_box=(390, 870, 0, 1.1),
... )

3.20.5 Luminous Efficiency#
>>> sd_mesopic_luminous_efficiency_function = (
... colour.sd_mesopic_luminous_efficiency_function(0.2)
... )
>>> plot_multi_sds(
... (
... sd_mesopic_luminous_efficiency_function,
... colour.SDS_LEFS_PHOTOPIC["CIE 1924 Photopic Standard Observer"],
... colour.SDS_LEFS_SCOTOPIC["CIE 1951 Scotopic Standard Observer"],
... ),
... y_label="Luminous Efficiency",
... legend_location="upper right",
... y_tighten=True,
... margins=(0, 0, 0, 0.1),
... )

3.20.6 Colour Checker#
>>> from colour.characterisation.datasets.colour_checkers.sds import (
... COLOURCHECKER_INDEXES_TO_NAMES_MAPPING,
... )
>>> plot_multi_sds(
... [
... colour.SDS_COLOURCHECKERS["BabelColor Average"][value]
... for key, value in sorted(
... COLOURCHECKER_INDEXES_TO_NAMES_MAPPING.items()
... )
... ],
... plot_kwargs={
... "use_sd_colours": True,
... },
... title=("BabelColor Average - " "Spectral Distributions"),
... )

>>> plot_single_colour_checker(
... "ColorChecker 2005", text_kwargs={"visible": False}
... )

3.20.7 Chromaticities Prediction#
>>> plot_corresponding_chromaticities_prediction(
... 2, "Von Kries", "Bianco 2010"
... )

3.20.8 Chromaticities#
>>> import numpy as np
>>> RGB = np.random.random((32, 32, 3))
>>> plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931(
... RGB,
... "ITU-R BT.709",
... colourspaces=["ACEScg", "S-Gamut"],
... show_pointer_gamut=True,
... )

3.20.9 Colour Rendering Index#
>>> plot_single_sd_colour_rendering_index_bars(
... colour.SDS_ILLUMINANTS["FL2"]
... )

3.20.10 ANSI/IES TM-30-18 Colour Rendition Report#
>>> plot_single_sd_colour_rendition_report(colour.SDS_ILLUMINANTS["FL2"])

3.20.11 Gamut Section#
>>> plot_visible_spectrum_section(
... section_colours="RGB", section_opacity=0.15
... )

>>> plot_RGB_colourspace_section(
... "sRGB", section_colours="RGB", section_opacity=0.15
... )

3.20.12 Colour Temperature#
>>> plot_planckian_locus_in_chromaticity_diagram_CIE1960UCS(
... ["A", "B", "C"]
... )

4 User Guide#
The user guide provides an overview of Colour and explains important concepts and features; details can be found in the API Reference.
Tutorial#
Note
An interactive version of the tutorial is available via Google Colab.
Colour spreads over various domains of colour science, from colour models to optical phenomena. This tutorial does not give a complete overview of the API but is a good introduction to the main concepts.
Note
A directory with examples is available at this path in Colour installation: colour/examples. It can also be explored directly on Github.
from colour.plotting import *
colour_style()
plot_visible_spectrum()

Overview#
Colour is organised around various sub-packages:
adaptation: Chromatic adaptation models and transformations.
algebra: Algebra utilities.
appearance: Colour appearance models.
biochemistry: Biochemistry computations.
blindness: Colour vision deficiency models.
characterisation: Colour correction, camera and display characterisation.
colorimetry: Core objects for colour computations.
constants: CIE and CODATA constants.
continuous: Base objects for continuous data representation.
contrast: Objects for contrast sensitivity computation.
corresponding: Corresponding colour chromaticities computations.
difference: Colour difference computations.
geometry: Geometry primitives generation.
graph: Graph for automatic colour conversions.
hints: Type hints for annotations.
io: Input / output objects for reading and writing data.
models: Colour models.
notation: Colour notation systems.
phenomena: Computation of various optical phenomena.
plotting: Diagrams, figures, etc…
quality: Colour quality computation.
recovery: Reflectance recovery.
temperature: Colour temperature and correlated colour temperature computation.
utilities: Various utilities and data structures.
volume: Colourspace volumes computation and optimal colour stimuli.
Most of the public API is available from the root colour namespace:
import colour
print(colour.__all__[:5] + ["..."])
['domain_range_scale', 'get_domain_range_scale', 'set_domain_range_scale', 'CHROMATIC_ADAPTATION_METHODS', 'CHROMATIC_ADAPTATION_TRANSFORMS', '...']
The various sub-packages also expose their public API:
from pprint import pprint
for sub_package in (
"adaptation",
"algebra",
"appearance",
"biochemistry",
"blindness",
"characterisation",
"colorimetry",
"constants",
"continuous",
"contrast",
"corresponding",
"difference",
"geometry",
"graph",
"hints",
"io",
"models",
"notation",
"phenomena",
"plotting",
"quality",
"recovery",
"temperature",
"utilities",
"volume",
):
    print(sub_package.title())
    pprint(getattr(colour, sub_package).__all__[:5] + ["..."])
    print("\n")
Adaptation
['CHROMATIC_ADAPTATION_TRANSFORMS',
'CAT_BIANCO2010',
'CAT_BRADFORD',
'CAT_CAT02',
'CAT_CAT02_BRILL2008',
'...']
Algebra
['cartesian_to_spherical',
'spherical_to_cartesian',
'cartesian_to_polar',
'polar_to_cartesian',
'cartesian_to_cylindrical',
'...']
Appearance
['InductionFactors_Hunt',
'VIEWING_CONDITIONS_HUNT',
'CAM_Specification_Hunt',
'XYZ_to_Hunt',
'CAM_Specification_ATD95',
'...']
Biochemistry
['REACTION_RATE_MICHAELISMENTEN_METHODS',
'reaction_rate_MichaelisMenten',
'SUBSTRATE_CONCENTRATION_MICHAELISMENTEN_METHODS',
'substrate_concentration_MichaelisMenten',
'reaction_rate_MichaelisMenten_Michaelis1913',
'...']
Blindness
['CVD_MATRICES_MACHADO2010',
'msds_cmfs_anomalous_trichromacy_Machado2009',
'matrix_anomalous_trichromacy_Machado2009',
'matrix_cvd_Machado2009',
'...']
Characterisation
['RGB_CameraSensitivities',
'RGB_DisplayPrimaries',
'MSDS_ACES_RICD',
'MSDS_CAMERA_SENSITIVITIES',
'CCS_COLOURCHECKERS',
'...']
Colorimetry
['SpectralShape',
'SPECTRAL_SHAPE_DEFAULT',
'SpectralDistribution',
'MultiSpectralDistributions',
'reshape_sd',
'...']
Constants
['CONSTANT_K_M',
'CONSTANT_KP_M',
'CONSTANT_AVOGADRO',
'CONSTANT_BOLTZMANN',
'CONSTANT_LIGHT_SPEED',
'...']
Continuous
['AbstractContinuousFunction', 'Signal', 'MultiSignals', '...']
Contrast
['optical_MTF_Barten1999',
'pupil_diameter_Barten1999',
'sigma_Barten1999',
'retinal_illuminance_Barten1999',
'maximum_angular_size_Barten1999',
'...']
Corresponding
['BRENEMAN_EXPERIMENTS',
'BRENEMAN_EXPERIMENT_PRIMARIES_CHROMATICITIES',
'CorrespondingColourDataset',
'CorrespondingChromaticitiesPrediction',
'corresponding_chromaticities_prediction_CIE1994',
'...']
Difference
['delta_E_CAM02LCD',
'delta_E_CAM02SCD',
'delta_E_CAM02UCS',
'delta_E_CAM16LCD',
'delta_E_CAM16SCD',
'...']
Geometry
['PLANE_TO_AXIS_MAPPING',
'primitive_grid',
'primitive_cube',
'hull_section',
'PRIMITIVE_METHODS',
'...']
Graph
['CONVERSION_GRAPH',
'CONVERSION_GRAPH_NODE_LABELS',
'describe_conversion_path',
'convert',
'...']
Hints
['Any', 'Callable', 'Dict', 'Generator', 'Iterable', '...']
Io
['LUT1D',
'LUT3x1D',
'LUT3D',
'LUT_to_LUT',
'AbstractLUTSequenceOperator',
'...']
Models
['Jab_to_JCh',
'JCh_to_Jab',
'COLOURSPACE_MODELS',
'COLOURSPACE_MODELS_AXIS_LABELS',
'COLOURSPACE_MODELS_DOMAIN_RANGE_SCALE_1_TO_REFERENCE',
'...']
Notation
['MUNSELL_COLOURS_ALL',
'MUNSELL_COLOURS_1929',
'MUNSELL_COLOURS_REAL',
'MUNSELL_COLOURS',
'munsell_value',
'...']
Phenomena
['scattering_cross_section',
'rayleigh_optical_depth',
'rayleigh_scattering',
'sd_rayleigh_scattering',
'...']
Plotting
['SD_ASTMG173_ETR',
'SD_ASTMG173_GLOBAL_TILT',
'SD_ASTMG173_DIRECT_CIRCUMSOLAR',
'CONSTANTS_COLOUR_STYLE',
'CONSTANTS_ARROW_STYLE',
'...']
Quality
['SDS_TCS',
'SDS_VS',
'ColourRendering_Specification_CIE2017',
'colour_fidelity_index_CIE2017',
'ColourQuality_Specification_ANSIIESTM3018',
'...']
Recovery
['SPECTRAL_SHAPE_sRGB_MALLETT2019',
'MSDS_BASIS_FUNCTIONS_sRGB_MALLETT2019',
'SPECTRAL_SHAPE_OTSU2018',
'BASIS_FUNCTIONS_OTSU2018',
'CLUSTER_MEANS_OTSU2018',
'...']
Temperature
['xy_to_CCT_CIE_D',
'CCT_to_xy_CIE_D',
'xy_to_CCT_Hernandez1999',
'CCT_to_xy_Hernandez1999',
'xy_to_CCT_Kang2002',
'...']
Utilities
['Lookup',
'Structure',
'CanonicalMapping',
'LazyCanonicalMapping',
'Node',
'...']
Volume
['OPTIMAL_COLOUR_STIMULI_ILLUMINANTS',
'is_within_macadam_limits',
'is_within_mesh_volume',
'is_within_pointer_gamut',
'generate_pulse_waves',
'...']
The codebase is documented and most docstrings have usage examples:
print(colour.temperature.CCT_to_uv_Ohno2013.__doc__)
Return the *CIE UCS* colourspace *uv* chromaticity coordinates from given
correlated colour temperature :math:`T_{cp}`, :math:`\Delta_{uv}` and
colour matching functions using *Ohno (2013)* method.
Parameters
----------
CCT_D_uv
Correlated colour temperature :math:`T_{cp}`, :math:`\Delta_{uv}`.
cmfs
Standard observer colour matching functions, default to the
*CIE 1931 2 Degree Standard Observer*.
Returns
-------
:class:`numpy.ndarray`
*CIE UCS* colourspace *uv* chromaticity coordinates.
References
----------
:cite:`Ohno2014a`
Examples
--------
>>> from pprint import pprint
>>> from colour import MSDS_CMFS, SPECTRAL_SHAPE_DEFAULT
>>> cmfs = (
... MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
... .copy()
... .align(SPECTRAL_SHAPE_DEFAULT)
... )
>>> CCT_D_uv = np.array([6507.4342201047066, 0.003223690901513])
>>> CCT_to_uv_Ohno2013(CCT_D_uv, cmfs) # doctest: +ELLIPSIS
array([ 0.1977999..., 0.3122004...])
At the core of Colour is the colour.colorimetry sub-package; it defines the objects needed for spectral computations and many others:
pprint(colour.colorimetry.__all__)
['SpectralShape',
'SPECTRAL_SHAPE_DEFAULT',
'SpectralDistribution',
'MultiSpectralDistributions',
'reshape_sd',
'reshape_msds',
'sds_and_msds_to_sds',
'sds_and_msds_to_msds',
'sd_blackbody',
'blackbody_spectral_radiance',
'planck_law',
'LMS_ConeFundamentals',
'RGB_ColourMatchingFunctions',
'XYZ_ColourMatchingFunctions',
'CCS_ILLUMINANTS',
'MSDS_CMFS',
'MSDS_CMFS_LMS',
'MSDS_CMFS_RGB',
'MSDS_CMFS_STANDARD_OBSERVER',
'SDS_BASIS_FUNCTIONS_CIE_ILLUMINANT_D_SERIES',
'SDS_ILLUMINANTS',
'SDS_LEFS',
'SDS_LEFS_PHOTOPIC',
'SDS_LEFS_SCOTOPIC',
'TVS_ILLUMINANTS',
'TVS_ILLUMINANTS_HUNTERLAB',
'CCS_LIGHT_SOURCES',
'SDS_LIGHT_SOURCES',
'sd_constant',
'sd_zeros',
'sd_ones',
'msds_constant',
'msds_zeros',
'msds_ones',
'SD_GAUSSIAN_METHODS',
'sd_gaussian',
'sd_gaussian_normal',
'sd_gaussian_fwhm',
'SD_SINGLE_LED_METHODS',
'sd_single_led',
'sd_single_led_Ohno2005',
'SD_MULTI_LEDS_METHODS',
'sd_multi_leds',
'sd_multi_leds_Ohno2005',
'SD_TO_XYZ_METHODS',
'MSDS_TO_XYZ_METHODS',
'sd_to_XYZ',
'msds_to_XYZ',
'SPECTRAL_SHAPE_ASTME308',
'handle_spectral_arguments',
'lagrange_coefficients_ASTME2022',
'tristimulus_weighting_factors_ASTME2022',
'adjust_tristimulus_weighting_factors_ASTME308',
'sd_to_XYZ_integration',
'sd_to_XYZ_tristimulus_weighting_factors_ASTME308',
'sd_to_XYZ_ASTME308',
'msds_to_XYZ_integration',
'msds_to_XYZ_ASTME308',
'wavelength_to_XYZ',
'spectral_uniformity',
'BANDPASS_CORRECTION_METHODS',
'bandpass_correction',
'bandpass_correction_Stearns1988',
'sd_CIE_standard_illuminant_A',
'sd_CIE_illuminant_D_series',
'daylight_locus_function',
'sd_mesopic_luminous_efficiency_function',
'mesopic_weighting_function',
'LIGHTNESS_METHODS',
'lightness',
'lightness_Glasser1958',
'lightness_Wyszecki1963',
'lightness_CIE1976',
'lightness_Fairchild2010',
'lightness_Fairchild2011',
'lightness_Abebe2017',
'intermediate_lightness_function_CIE1976',
'LUMINANCE_METHODS',
'luminance',
'luminance_Newhall1943',
'luminance_ASTMD1535',
'luminance_CIE1976',
'luminance_Fairchild2010',
'luminance_Fairchild2011',
'luminance_Abebe2017',
'intermediate_luminance_function_CIE1976',
'dominant_wavelength',
'complementary_wavelength',
'excitation_purity',
'colorimetric_purity',
'luminous_flux',
'luminous_efficiency',
'luminous_efficacy',
'RGB_10_degree_cmfs_to_LMS_10_degree_cmfs',
'RGB_2_degree_cmfs_to_XYZ_2_degree_cmfs',
'RGB_10_degree_cmfs_to_XYZ_10_degree_cmfs',
'LMS_2_degree_cmfs_to_XYZ_2_degree_cmfs',
'LMS_10_degree_cmfs_to_XYZ_10_degree_cmfs',
'WHITENESS_METHODS',
'whiteness',
'whiteness_Berger1959',
'whiteness_Taube1960',
'whiteness_Stensby1968',
'whiteness_ASTME313',
'whiteness_Ganz1979',
'whiteness_CIE2004',
'YELLOWNESS_METHODS',
'yellowness',
'yellowness_ASTMD1925',
'yellowness_ASTME313_alternative',
'YELLOWNESS_COEFFICIENTS_ASTME313',
'yellowness_ASTME313']
Colour computations leverage a comprehensive collection of datasets available in most sub-packages; for example, colour.colorimetry.datasets defines the following components:
pprint(colour.colorimetry.datasets.__all__)
['MSDS_CMFS',
'MSDS_CMFS_LMS',
'MSDS_CMFS_RGB',
'MSDS_CMFS_STANDARD_OBSERVER',
'CCS_ILLUMINANTS',
'SDS_BASIS_FUNCTIONS_CIE_ILLUMINANT_D_SERIES',
'TVS_ILLUMINANTS_HUNTERLAB',
'SDS_ILLUMINANTS',
'TVS_ILLUMINANTS',
'CCS_LIGHT_SOURCES',
'SDS_LIGHT_SOURCES',
'SDS_LEFS',
'SDS_LEFS_PHOTOPIC',
'SDS_LEFS_SCOTOPIC']
From Spectral Distribution#
Whether it be a sample spectral distribution, colour matching functions or illuminants, spectral data is manipulated using an object built with the colour.SpectralDistribution class or based on it:
# Defining a sample spectral distribution data.
data_sample = {
380: 0.048,
385: 0.051,
390: 0.055,
395: 0.060,
400: 0.065,
405: 0.068,
410: 0.068,
415: 0.067,
420: 0.064,
425: 0.062,
430: 0.059,
435: 0.057,
440: 0.055,
445: 0.054,
450: 0.053,
455: 0.053,
460: 0.052,
465: 0.052,
470: 0.052,
475: 0.053,
480: 0.054,
485: 0.055,
490: 0.057,
495: 0.059,
500: 0.061,
505: 0.062,
510: 0.065,
515: 0.067,
520: 0.070,
525: 0.072,
530: 0.074,
535: 0.075,
540: 0.076,
545: 0.078,
550: 0.079,
555: 0.082,
560: 0.087,
565: 0.092,
570: 0.100,
575: 0.107,
580: 0.115,
585: 0.122,
590: 0.129,
595: 0.134,
600: 0.138,
605: 0.142,
610: 0.146,
615: 0.150,
620: 0.154,
625: 0.158,
630: 0.163,
635: 0.167,
640: 0.173,
645: 0.180,
650: 0.188,
655: 0.196,
660: 0.204,
665: 0.213,
670: 0.222,
675: 0.231,
680: 0.242,
685: 0.251,
690: 0.261,
695: 0.271,
700: 0.282,
705: 0.294,
710: 0.305,
715: 0.318,
720: 0.334,
725: 0.354,
730: 0.372,
735: 0.392,
740: 0.409,
745: 0.420,
750: 0.436,
755: 0.450,
760: 0.462,
765: 0.465,
770: 0.448,
775: 0.432,
780: 0.421,
}
sd = colour.SpectralDistribution(data_sample, name="Sample")
print(repr(sd))
SpectralDistribution([[ 3.80000000e+02, 4.80000000e-02],
[ 3.85000000e+02, 5.10000000e-02],
[ 3.90000000e+02, 5.50000000e-02],
[ 3.95000000e+02, 6.00000000e-02],
[ 4.00000000e+02, 6.50000000e-02],
[ 4.05000000e+02, 6.80000000e-02],
[ 4.10000000e+02, 6.80000000e-02],
[ 4.15000000e+02, 6.70000000e-02],
[ 4.20000000e+02, 6.40000000e-02],
[ 4.25000000e+02, 6.20000000e-02],
[ 4.30000000e+02, 5.90000000e-02],
[ 4.35000000e+02, 5.70000000e-02],
[ 4.40000000e+02, 5.50000000e-02],
[ 4.45000000e+02, 5.40000000e-02],
[ 4.50000000e+02, 5.30000000e-02],
[ 4.55000000e+02, 5.30000000e-02],
[ 4.60000000e+02, 5.20000000e-02],
[ 4.65000000e+02, 5.20000000e-02],
[ 4.70000000e+02, 5.20000000e-02],
[ 4.75000000e+02, 5.30000000e-02],
[ 4.80000000e+02, 5.40000000e-02],
[ 4.85000000e+02, 5.50000000e-02],
[ 4.90000000e+02, 5.70000000e-02],
[ 4.95000000e+02, 5.90000000e-02],
[ 5.00000000e+02, 6.10000000e-02],
[ 5.05000000e+02, 6.20000000e-02],
[ 5.10000000e+02, 6.50000000e-02],
[ 5.15000000e+02, 6.70000000e-02],
[ 5.20000000e+02, 7.00000000e-02],
[ 5.25000000e+02, 7.20000000e-02],
[ 5.30000000e+02, 7.40000000e-02],
[ 5.35000000e+02, 7.50000000e-02],
[ 5.40000000e+02, 7.60000000e-02],
[ 5.45000000e+02, 7.80000000e-02],
[ 5.50000000e+02, 7.90000000e-02],
[ 5.55000000e+02, 8.20000000e-02],
[ 5.60000000e+02, 8.70000000e-02],
[ 5.65000000e+02, 9.20000000e-02],
[ 5.70000000e+02, 1.00000000e-01],
[ 5.75000000e+02, 1.07000000e-01],
[ 5.80000000e+02, 1.15000000e-01],
[ 5.85000000e+02, 1.22000000e-01],
[ 5.90000000e+02, 1.29000000e-01],
[ 5.95000000e+02, 1.34000000e-01],
[ 6.00000000e+02, 1.38000000e-01],
[ 6.05000000e+02, 1.42000000e-01],
[ 6.10000000e+02, 1.46000000e-01],
[ 6.15000000e+02, 1.50000000e-01],
[ 6.20000000e+02, 1.54000000e-01],
[ 6.25000000e+02, 1.58000000e-01],
[ 6.30000000e+02, 1.63000000e-01],
[ 6.35000000e+02, 1.67000000e-01],
[ 6.40000000e+02, 1.73000000e-01],
[ 6.45000000e+02, 1.80000000e-01],
[ 6.50000000e+02, 1.88000000e-01],
[ 6.55000000e+02, 1.96000000e-01],
[ 6.60000000e+02, 2.04000000e-01],
[ 6.65000000e+02, 2.13000000e-01],
[ 6.70000000e+02, 2.22000000e-01],
[ 6.75000000e+02, 2.31000000e-01],
[ 6.80000000e+02, 2.42000000e-01],
[ 6.85000000e+02, 2.51000000e-01],
[ 6.90000000e+02, 2.61000000e-01],
[ 6.95000000e+02, 2.71000000e-01],
[ 7.00000000e+02, 2.82000000e-01],
[ 7.05000000e+02, 2.94000000e-01],
[ 7.10000000e+02, 3.05000000e-01],
[ 7.15000000e+02, 3.18000000e-01],
[ 7.20000000e+02, 3.34000000e-01],
[ 7.25000000e+02, 3.54000000e-01],
[ 7.30000000e+02, 3.72000000e-01],
[ 7.35000000e+02, 3.92000000e-01],
[ 7.40000000e+02, 4.09000000e-01],
[ 7.45000000e+02, 4.20000000e-01],
[ 7.50000000e+02, 4.36000000e-01],
[ 7.55000000e+02, 4.50000000e-01],
[ 7.60000000e+02, 4.62000000e-01],
[ 7.65000000e+02, 4.65000000e-01],
[ 7.70000000e+02, 4.48000000e-01],
[ 7.75000000e+02, 4.32000000e-01],
[ 7.80000000e+02, 4.21000000e-01]],
interpolator=SpragueInterpolator,
interpolator_args={},
extrapolator=Extrapolator,
extrapolator_args={u'right': None, u'method': u'Constant', u'left': None})
The sample spectral distribution can be easily plotted against the visible spectrum:
# Plotting the sample spectral distribution.
plot_single_sd(sd)

With the sample spectral distribution defined, its shape is retrieved as follows:
# Displaying the sample spectral distribution shape.
print(sd.shape)
(380.0, 780.0, 5.0)
The returned shape is an instance of the colour.SpectralShape class:
repr(sd.shape)
'SpectralShape(380.0, 780.0, 5.0)'
The colour.SpectralShape class is used throughout Colour to define spectral dimensions and is instantiated as follows:
# Using *colour.SpectralShape* with iteration.
shape = colour.SpectralShape(start=0, end=10, interval=1)
for wavelength in shape:
    print(wavelength)
# The *colour.SpectralShape.wavelengths* property provides the complete
# range of wavelengths.
shape = colour.SpectralShape(0, 10, 0.5)
shape.wavelengths
0.0
1.0
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. ,
4.5, 5. , 5.5, 6. , 6.5, 7. , 7.5, 8. , 8.5,
9. , 9.5, 10. ])
Colour defines three convenient objects to create constant spectral distributions:
colour.sd_constant
colour.sd_zeros
colour.sd_ones
# Defining a constant spectral distribution.
sd_constant = colour.sd_constant(100)
print('"Constant Spectral Distribution"')
print(sd_constant.shape)
print(sd_constant[400])
# Defining a zeros filled spectral distribution.
print('\n"Zeros Filled Spectral Distribution"')
sd_zeros = colour.sd_zeros()
print(sd_zeros.shape)
print(sd_zeros[400])
# Defining a ones filled spectral distribution.
print('\n"Ones Filled Spectral Distribution"')
sd_ones = colour.sd_ones()
print(sd_ones.shape)
print(sd_ones[400])
"Constant Spectral Distribution"
(360.0, 780.0, 1.0)
100.0
"Zeros Filled Spectral Distribution"
(360.0, 780.0, 1.0)
0.0
"Ones Filled Spectral Distribution"
(360.0, 780.0, 1.0)
1.0
By default, the shape used by colour.sd_constant, colour.sd_zeros and colour.sd_ones is the one defined by the colour.SPECTRAL_SHAPE_DEFAULT attribute, which is based on the ASTM E308-15 practise.
print(repr(colour.SPECTRAL_SHAPE_DEFAULT))
SpectralShape(360, 780, 1)
A custom shape can be passed to construct a constant spectral distribution with user defined dimensions:
colour.sd_ones(colour.SpectralShape(400, 700, 5))[450]
1.0
The colour.SpectralDistribution class supports the following arithmetical operations:
addition
subtraction
multiplication
division
exponentiation
sd1 = colour.sd_ones()
print('"Ones Filled Spectral Distribution"')
print(sd1[400])
print('\n"x2 Constant Multiplied"')
print((sd1 * 2)[400])
print('\n"+ Spectral Distribution"')
print((sd1 + colour.sd_ones())[400])
"Ones Filled Spectral Distribution"
1.0
"x2 Constant Multiplied"
2.0
"+ Spectral Distribution"
2.0
Often, interpolation of the spectral distribution is required; this is achieved with the colour.SpectralDistribution.interpolate method. Depending on the wavelengths uniformity, the default interpolation method will differ. Following the CIE 167:2005 recommendation, the method developed by Sprague (1880) should be used for interpolating functions having a uniformly spaced independent variable, and a Cubic Spline method for a non-uniformly spaced independent variable [CIET13805a].
The uniformity of the sample spectral distribution is assessed as follows:
# Checking the sample spectral distribution uniformity.
print(sd.is_uniform())
True
In this case, since the sample spectral distribution is uniform, the interpolation defaults to the colour.SpragueInterpolator interpolator.
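For contrast, a quick sketch with hypothetical non-uniformly spaced data shows the alternate default taking over:
# Hypothetical non-uniformly sampled spectral distribution: it is not uniform,
# so Colour falls back to a cubic-spline-based default interpolator.
sd_non_uniform = colour.SpectralDistribution(
    {400: 0.100, 410: 0.105, 435: 0.120, 465: 0.135}, name="Non Uniform Sample"
)
print(sd_non_uniform.is_uniform())
print(sd_non_uniform.interpolator)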
Note
Interpolation happens in place and may alter the original data; use the colour.SpectralDistribution.copy method to generate a copy of the spectral distribution before interpolation.
# Copying the sample spectral distribution.
sd_copy = sd.copy()
# Interpolating the copied sample spectral distribution.
sd_copy.interpolate(colour.SpectralShape(400, 770, 1))
sd_copy[401]
0.065809599999999996
# Comparing the interpolated spectral distribution with the original one.
plot_multi_sds([sd, sd_copy], bounding_box=[730, 780, 0.25, 0.5])

Extrapolation, although dangerous, can be used to help align two spectral distributions. The CIE publication CIE 15:2004 "Colorimetry" recommends that unmeasured values may be set equal to the nearest measured value of the appropriate quantity in truncation [CIET14804f]:
# Extrapolating the copied sample spectral distribution.
sd_copy.extrapolate(colour.SpectralShape(340, 830, 1))
sd_copy[340], sd_copy[830]
(0.065000000000000002, 0.44800000000000018)
The underlying interpolator can be swapped for any of the Colour interpolators:
pprint(
    [
        export
        for export in colour.algebra.interpolation.__all__
        if "Interpolator" in export
    ]
)
[u'KernelInterpolator',
u'LinearInterpolator',
u'SpragueInterpolator',
u'CubicSplineInterpolator',
u'PchipInterpolator',
u'NullInterpolator']
# Changing interpolator while trimming the copied spectral distribution.
sd_copy.interpolate(
colour.SpectralShape(400, 700, 10),
interpolator=colour.LinearInterpolator,
)
SpectralDistribution([[ 4.00000000e+02, 6.50000000e-02],
[ 4.10000000e+02, 6.80000000e-02],
[ 4.20000000e+02, 6.40000000e-02],
[ 4.30000000e+02, 5.90000000e-02],
[ 4.40000000e+02, 5.50000000e-02],
[ 4.50000000e+02, 5.30000000e-02],
[ 4.60000000e+02, 5.20000000e-02],
[ 4.70000000e+02, 5.20000000e-02],
[ 4.80000000e+02, 5.40000000e-02],
[ 4.90000000e+02, 5.70000000e-02],
[ 5.00000000e+02, 6.10000000e-02],
[ 5.10000000e+02, 6.50000000e-02],
[ 5.20000000e+02, 7.00000000e-02],
[ 5.30000000e+02, 7.40000000e-02],
[ 5.40000000e+02, 7.60000000e-02],
[ 5.50000000e+02, 7.90000000e-02],
[ 5.60000000e+02, 8.70000000e-02],
[ 5.70000000e+02, 1.00000000e-01],
[ 5.80000000e+02, 1.15000000e-01],
[ 5.90000000e+02, 1.29000000e-01],
[ 6.00000000e+02, 1.38000000e-01],
[ 6.10000000e+02, 1.46000000e-01],
[ 6.20000000e+02, 1.54000000e-01],
[ 6.30000000e+02, 1.63000000e-01],
[ 6.40000000e+02, 1.73000000e-01],
[ 6.50000000e+02, 1.88000000e-01],
[ 6.60000000e+02, 2.04000000e-01],
[ 6.70000000e+02, 2.22000000e-01],
[ 6.80000000e+02, 2.42000000e-01],
[ 6.90000000e+02, 2.61000000e-01],
[ 7.00000000e+02, 2.82000000e-01]],
interpolator=SpragueInterpolator,
interpolator_args={},
extrapolator=Extrapolator,
extrapolator_args={u'right': None, u'method': u'Constant', u'left': None})
The extrapolation behaviour can be changed to use the Linear method instead of the default Constant method, or even to use arbitrary constant left and right values:
# Extrapolating the copied sample spectral distribution with *Linear* method.
sd_copy.extrapolate(
colour.SpectralShape(340, 830, 1),
extrapolator_kwargs={"method": "Linear", "right": 0},
)
sd_copy[340], sd_copy[830]
(0.046999999999999348, 0.0)
Aligning a spectral distribution is a convenient way to first interpolate the current data within its original bounds and then, if required, extrapolate any missing values to match the requested shape:
# Aligning the cloned sample spectral distribution.
# The spectral distribution is first trimmed as above.
sd_copy.interpolate(colour.SpectralShape(400, 700, 1))
sd_copy.align(colour.SpectralShape(340, 830, 5))
sd_copy[340], sd_copy[830]
(0.065000000000000002, 0.28199999999999975)
The colour.SpectralDistribution class also supports various arithmetic operations like addition, subtraction, multiplication, division or exponentiation with numeric and array_like variables or other colour.SpectralDistribution class instances:
sd = colour.SpectralDistribution(
    {
        410: 0.25,
        420: 0.50,
        430: 0.75,
        440: 1.0,
        450: 0.75,
        460: 0.50,
        480: 0.25,
    }
)
print((sd.copy() + 1).values)
print((sd.copy() * 2).values)
print((sd * [0.35, 1.55, 0.75, 2.55, 0.95, 0.65, 0.15]).values)
print(
    (
        sd
        * colour.sd_constant(2, sd.shape)
        * colour.sd_constant(3, sd.shape)
    ).values
)
[ 1.25 1.5 1.75 2. 1.75 1.5 1.25]
[ 0.5 1. 1.5 2. 1.5 1. 0.5]
[ 0.0875 0.775 0.5625 2.55 0.7125 0.325 0.0375]
[ 1.5 3. 4.5 6. 4.5 3. nan 1.5]
The spectral distribution can be normalised with an arbitrary factor:
print(sd.normalise().values)
print(sd.normalise(100).values)
[ 0.25 0.5 0.75 1. 0.75 0.5 0.25]
[ 25. 50. 75. 100. 75. 50. 25.]
At the heart of the colour.SpectralDistribution class is the colour.continuous.Signal class, which implements the colour.continuous.Signal.function method.
Evaluating the function for any independent domain \(x \in \mathbb{R}\) variable returns a corresponding range \(y \in \mathbb{R}\) variable.
It adopts an interpolating function encapsulated inside an extrapolating function. The resulting function independent domain, stored as discrete values in the colour.continuous.Signal.domain attribute, corresponds with the function dependent and already known range stored in the colour.continuous.Signal.range attribute.
Describing the colour.continuous.Signal class is beyond the scope of this tutorial, but its core capability can be illustrated:
import numpy as np
range_ = np.linspace(10, 100, 10)
signal = colour.continuous.Signal(range_)
print(repr(signal))
Signal([[ 0., 10.],
[ 1., 20.],
[ 2., 30.],
[ 3., 40.],
[ 4., 50.],
[ 5., 60.],
[ 6., 70.],
[ 7., 80.],
[ 8., 90.],
[ 9., 100.]],
interpolator=KernelInterpolator,
interpolator_kwargs={},
extrapolator=Extrapolator,
extrapolator_kwargs={u'right': nan, u'method': u'Constant', u'left': nan})
# Returning the corresponding range *y* variable for any arbitrary independent domain *x* variable.
signal[np.random.uniform(0, 9, 10)]
array([ 55.91309735, 65.4172615 , 65.54495059, 88.17819416,
61.88860248, 10.53878826, 55.25130534, 46.14659783,
86.41406136, 84.59897703])
Convert to Tristimulus Values#
From a given spectral distribution, CIE XYZ tristimulus values can be calculated:
sd = colour.SpectralDistribution(data_sample)
cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
illuminant = colour.SDS_ILLUMINANTS["D65"]
# Calculating the sample spectral distribution *CIE XYZ* tristimulus values.
XYZ = colour.sd_to_XYZ(sd, cmfs, illuminant)
print(XYZ)
[ 10.97085572 9.70278591 6.05562778]
From CIE XYZ Colourspace#
CIE XYZ is the central colourspace for colour science, from which many other colourspace models and computations are derived:
# Displaying objects interacting directly with the *CIE XYZ* colourspace.
pprint(colour.COLOURSPACE_MODELS)
('CAM02LCD',
'CAM02SCD',
'CAM02UCS',
'CAM16LCD',
'CAM16SCD',
'CAM16UCS',
'CIE XYZ',
'CIE xyY',
'CIE Lab',
'CIE LCHab',
'CIE Luv',
'CIE Luv uv',
'CIE LCHuv',
'CIE UCS',
'CIE UCS uv',
'CIE UVW',
'DIN99',
'Hunter Lab',
'Hunter Rdab',
'ICtCp',
'IPT',
'IgPgTg',
'Jzazbz',
'OSA UCS',
'Oklab',
'hdr-CIELAB',
'hdr-IPT')
Convert to Display Colours#
CIE XYZ tristimulus values can be converted into sRGB colourspace RGB values in order to display them on screen:
# The output domain of *colour.sd_to_XYZ* is [0, 100] and the input
# domain of *colour.XYZ_to_sRGB* is [0, 1]. It needs to be accounted for,
# thus the input *CIE XYZ* tristimulus values are scaled.
RGB = colour.XYZ_to_sRGB(XYZ / 100)
print(RGB)
[ 0.45675795 0.30986982 0.24861924]
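As an alternative sketch (assuming the colour.domain_range_scale context manager described in the Advanced Concepts page), the same conversion can be performed on the [0, 1] scale without the manual division:
# Hypothetical alternative: computing within a "1" domain-range scale so that
# *colour.sd_to_XYZ* directly returns values compatible with *colour.XYZ_to_sRGB*;
# this should reproduce the *RGB* values printed above.
with colour.domain_range_scale("1"):
    print(colour.XYZ_to_sRGB(colour.sd_to_XYZ(sd, cmfs, illuminant)))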
# Plotting the *sRGB* colourspace colour of the *Sample* spectral distribution.
plot_single_colour_swatch(
ColourSwatch(RGB, "Sample"), text_kwargs={"size": "x-large"}
)

Generate Colour Rendition Charts#
Likewise, colour values from a colour rendition chart sample can be computed.
Note
This is useful for render time checks in the VFX industry, where a synthetic colour chart can be inserted into a render to ensure the colour management is acting as expected.
The colour.characterisation sub-package contains the datasets for various colour rendition charts:
# Colour rendition charts chromaticity coordinates.
print(sorted(colour.characterisation.CCS_COLOURCHECKERS.keys()))
# Colour rendition charts spectral distributions.
print(sorted(colour.characterisation.SDS_COLOURCHECKERS.keys()))
['BabelColor Average', 'ColorChecker 1976', 'ColorChecker 2005', 'ColorChecker24 - After November 2014', 'ColorChecker24 - Before November 2014', 'babel_average', 'cc2005', 'cca2014', 'ccb2014']
['BabelColor Average', 'ColorChecker N Ohta', 'babel_average', 'cc_ohta']
Note
The above cc2005, babel_average and cc_ohta keys are convenient aliases for, respectively, the ColorChecker 2005, BabelColor Average and ColorChecker N Ohta keys.
# Plotting the *sRGB* colourspace colour of *neutral 5 (.70 D)* patch.
patch_name = "neutral 5 (.70 D)"
patch_sd = colour.SDS_COLOURCHECKERS["ColorChecker N Ohta"][patch_name]
XYZ = colour.sd_to_XYZ(patch_sd, cmfs, illuminant)
RGB = colour.XYZ_to_sRGB(XYZ / 100)
plot_single_colour_swatch(
ColourSwatch(RGB, patch_name.title()), text_kwargs={"size": "x-large"}
)

Colour defines a convenient plotting object to draw synthetic colour rendition charts figures:
plot_single_colour_checker(
colour_checker="ColorChecker 2005", text_kwargs={"visible": False}
)

Convert to Chromaticity Coordinates#
Given a spectral distribution, chromaticity coordinates CIE xy can be computed using the colour.XYZ_to_xy definition:
# Computing *CIE xy* chromaticity coordinates for the *neutral 5 (.70 D)* patch.
xy = colour.XYZ_to_xy(XYZ)
print(xy)
[ 0.31259787 0.32870029]
Chromaticity coordinates CIE xy can be plotted into the CIE 1931 Chromaticity Diagram:
import matplotlib.pyplot as plt
# Plotting the *CIE 1931 Chromaticity Diagram*.
# The argument *standalone=False* is passed so that the plot doesn't get
# displayed and can be used as a basis for other plots.
plot_chromaticity_diagram_CIE1931(standalone=False)
# Plotting the *CIE xy* chromaticity coordinates.
x, y = xy
plt.plot(x, y, "o-", color="white")
# Annotating the plot.
plt.annotate(
patch_sd.name.title(),
xy=xy,
xytext=(-50, 30),
textcoords="offset points",
arrowprops=dict(arrowstyle="->", connectionstyle="arc3, rad=-0.2"),
)
# Displaying the plot.
render(
standalone=True,
limits=(-0.1, 0.9, -0.1, 0.9),
x_tighten=True,
y_tighten=True,
)

See More#
The Basic Concepts page puts an emphasis on the basic but important concepts of Colour.
The Advanced Concepts page describes some advanced usage scenarios of Colour.
The Google Colab How-To guide for Colour shows various techniques to solve specific problems and highlights some interesting use cases.
How-To#
The Google Colab How-To guide for Colour shows various techniques to solve specific problems and highlights some interesting use cases.
Basic Concepts#
This page puts an emphasis on the basic concepts of Colour that are important to understand.
Object Name Categorisation#
The API tries to bundle objects into categories by naming them with common prefixes, which makes introspection and auto-completion easier.
For example, in IPython or Jupyter Notebook, most of the definitions pertaining to the spectral distribution handling can be found as follows:
In [1]: import colour
In [2]: colour.sd_
sd_blackbody() sd_gaussian() sd_rayleigh_scattering() sd_zeros
sd_CIE_illuminant_D_series() sd_mesopic_luminous_efficiency_function() sd_single_led()
sd_CIE_standard_illuminant_A() sd_multi_leds() sd_to_aces_relative_exposure_values()
sd_constant() sd_ones() sd_to_XYZ
Likewise, for the spectral distribution handling related attributes:
In [2]: colour.SD
SD_GAUSSIAN_METHODS SD_TO_XYZ_METHODS SDS_ILLUMINANTS SDS_LIGHT_SOURCES
SD_MULTI_LEDS_METHODS SDS_COLOURCHECKERS SDS_LEFS
SD_SINGLE_LED_METHODS SDS_FILTERS SDS_LENSES
Similarly, all the RGB colourspaces can be individually accessed from the colour.models namespace:
In [2]: colour.models.RGB_COLOURSPACE
RGB_COLOURSPACE_ACES2065_1 RGB_COLOURSPACE_ACESPROXY RGB_COLOURSPACE_APPLE_RGB RGB_COLOURSPACE_BT470_525
RGB_COLOURSPACE_ACESCC RGB_COLOURSPACE_ADOBE_RGB1998 RGB_COLOURSPACE_BEST_RGB RGB_COLOURSPACE_BT470_625
RGB_COLOURSPACE_ACESCCT RGB_COLOURSPACE_ADOBE_WIDE_GAMUT_RGB RGB_COLOURSPACE_BETA_RGB RGB_COLOURSPACE_BT709
RGB_COLOURSPACE_ACESCG RGB_COLOURSPACE_ARRI_WIDE_GAMUT_3 RGB_COLOURSPACE_BT2020 RGB_COLOURSPACE_CIE_RGB
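The same prefix-based discovery can also be performed programmatically; a minimal sketch using the standard dir() function rather than interactive auto-completion:
import colour
# Listing the spectral distribution related definitions.
print([name for name in dir(colour) if name.startswith("sd_")])
# Listing a few of the RGB colourspace objects.
print([name for name in dir(colour.models) if name.startswith("RGB_COLOURSPACE_")][:5])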
Abbreviations#
The following abbreviations are in use in Colour:
CAM : Colour Appearance Model
CCS : Chromaticity Coordinates
CCTF : Colour Component Transfer Function
CCT : Correlated Colour Temperature
CMY : Cyan, Magenta, Yellow
CMYK : Cyan, Magenta, Yellow, Black
CVD : Colour Vision Deficiency
CV : Code Value
EOTF : Electro-Optical Transfer Function
IDT : Input Device Transform
MSDS : Multi-Spectral Distributions
OETF : Optical-Electrical Transfer Function
OOTF : Optical-Optical Transfer Function
SD : Spectral Distribution
TVS : Tristimulus Values
N-Dimensional Array Support#
Most of Colour's definitions are fully vectorised and support n-dimensional arrays by leveraging Numpy.
While it is recommended to use ndarray as input for the API objects, it is possible to use tuples or lists:
import colour
xyY = (0.4316, 0.3777, 0.1008)
colour.xyY_to_XYZ(xyY)
array([ 0.11518475, 0.1008 , 0.05089373])
xyY = [0.4316, 0.3777, 0.1008]
colour.xyY_to_XYZ(xyY)
array([ 0.11518475, 0.1008 , 0.05089373])
xyY = [
(0.4316, 0.3777, 0.1008),
(0.4316, 0.3777, 0.1008),
(0.4316, 0.3777, 0.1008),
]
colour.xyY_to_XYZ(xyY)
array([[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373]])
As shown in the above example, there is widespread support for n-dimensional arrays:
import numpy as np
xyY = np.array([0.4316, 0.3777, 0.1008])
xyY = np.tile(xyY, (6, 1))
colour.xyY_to_XYZ(xyY)
array([[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373]])
colour.xyY_to_XYZ(xyY.reshape([2, 3, 3]))
array([[[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373]],
[[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373],
[ 0.11518475, 0.1008 , 0.05089373]]])
This enables image processing:
RGB = colour.read_image("_static/Logo_Small_001.png")
RGB = RGB[..., 0:3] # Discarding alpha channel.
XYZ = colour.sRGB_to_XYZ(RGB)
colour.plotting.plot_image(XYZ, text_kwargs={"text": "sRGB to XYZ"})

Spectral Representation and Continuous Signal#
Floating Point Wavelengths#
Colour's current representation of spectral data is atypical: it was shaped by the failures and shortcomings of the previous implementation, which required less than ideal code to support floating-point wavelengths. Wavelengths should not have to be defined as integer values, and it is common to get data from instruments whose domain is returned as floating-point values.
For example, the data from an Ocean Insight (Optics) STS-VIS spectrometer is typically saved with 3 digits decimal precision:
Data from Subt2_14-36-15-210.txt Node
Date: Sat Nov 17 14:36:15 NZDT 2018
User: kelsolaar
Spectrometer: S12286
Trigger mode: 0
Resolution mode: 1024 pixels
Integration Time (sec): 5.000000E0
Scans to average: 3
Nonlinearity correction enabled: true
Boxcar width: 3
Baseline correction enabled: true
XAxis mode: Wavelengths
Number of Pixels in Spectrum: 1024
# >>>>>Begin Spectral Data<<<<<
338.028 279.71
338.482 285.43
338.936 291.33
...
821.513 3112.65
822.008 3133.74
822.503 3107.11
One solution to the problem is to quantize the data to integer wavelengths, but this is often undesirable. Prior to Colour 0.3.11, the spectral representation relied on a custom mutable mapping that allowed decimal keys to be retrieved within a given precision:
data_1 = {0.1999999998: "Nemo", 0.2000000000: "John"}
apm_1 = ArbitraryPrecisionMapping(data_1, key_decimals=10)
tuple(apm_1.keys())
(0.1999999998, 0.2)
apm_2 = ArbitraryPrecisionMapping(data_1, key_decimals=7)
tuple(apm_2.keys())
(0.2,)
While functional, the approach was brittle and inelegant, which triggered a significant amount of rework.
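With the current implementation, floating-point wavelengths such as those exported by the spectrometer above can be used directly as the domain of a spectral distribution; a minimal sketch using the first three measurements:
import colour
data = {338.028: 279.71, 338.482: 285.43, 338.936: 291.33}
sd = colour.SpectralDistribution(data, name="STS-VIS Sample")
print(sd.wavelengths)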
Continuous Signal#
All the spectral distributions in Colour are instances of the colour.SpectralDistribution class (or its sub-classes), a sub-class of the colour.continuous.Signal class, which is itself an implementation of the colour.continuous.AbstractContinuousFunction ABCMeta class:

Likewise, the multi-spectral distributions are instances of the colour.MultiSpectralDistributions class (or its sub-classes), a sub-class of the colour.continuous.MultiSignals class, which is a container for multiple colour.continuous.Signal sub-class instances and also implements the colour.continuous.AbstractContinuousFunction ABCMeta class.
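A minimal sketch of such a container: a multi-spectral distribution built from a dictionary mapping wavelengths to multiple values, here three hypothetical channels sharing the same domain:
import colour
data = {
    500: (0.0651, 0.0705, 0.0772),
    510: (0.0705, 0.0772, 0.0870),
    520: (0.0772, 0.0870, 0.1128),
}
msds = colour.MultiSpectralDistributions(data)
print(msds.wavelengths)
print(msds.labels)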

The colour.continuous.Signal class implements the Signal.function() method so that evaluating the function for any independent domain variable \(x \in \mathbb{R}\) returns a corresponding range variable \(y \in \mathbb{R}\).
It adopts an interpolating function encapsulated inside an extrapolating function. The resulting function's independent domain, stored as discrete values in the colour.continuous.Signal.domain attribute, corresponds with the function's dependent and already known range, stored in the colour.continuous.Signal.range attribute.
Consequently, it is possible to get the value of a spectral distribution at any given wavelength:
data = {
500: 0.0651,
520: 0.0705,
540: 0.0772,
560: 0.0870,
580: 0.1128,
600: 0.1360,
}
sd = colour.SpectralDistribution(data)
sd[555.5]
0.083453673782958995
Getting, Setting, Indexing and Slicing#
Attention
Indexing a spectral distribution (or multi-spectral distribution) with a numeric (or a numeric sequence) returns the corresponding value(s). Indexing a spectral distribution (or multi-spectral distribution) with a slice returns the values for the corresponding wavelength indexes.
While it is tempting to think that the colour.SpectralDistribution and colour.MultiSpectralDistributions classes behave like Numpy's ndarray, they do not entirely, and some peculiarities exist that make them different.
An important difference lies in the behaviour with respect to getting and setting the values of the data.
Getting the value(s) for a single wavelength (or multiple wavelengths) is done by indexing the colour.SpectralDistribution (or colour.MultiSpectralDistributions) class with a single numeric or an array of numeric wavelengths, e.g. sd[555.5] or sd[(555.25, 555.5, 555.75)].
However, if getting the values using a slice class instance, e.g. sd[0:3], the underlying discrete values for the indexes represented by the slice class instance are returned instead.
As shown in the previous section, getting the value of a wavelength is done as follows:
data = {
500: 0.0651,
520: 0.0705,
540: 0.0772,
560: 0.0870,
580: 0.1128,
600: 0.1360,
}
sd = colour.SpectralDistribution(data)
sd[555]
0.083135180664062502
Multiple wavelength values can be retrieved as follows:
sd[(555.0, 556.25, 557.5, 558.75, 560.0)]
array([ 0.08313518, 0.08395997, 0.08488108, 0.085897 , 0.087 ])
However, slices will return the values for the corresponding wavelength indexes:
sd[0:3]
array([ 0.0651, 0.0705, 0.0772])
sd[:]
array([ 0.0651, 0.0705, 0.0772, 0.087 , 0.1128, 0.136 ])
Note
Indexing a multi-spectral distribution is achieved similarly; it can, however, be sliced along multiple axes because the data is 2-dimensional, e.g. msds[0:3, 0:2].
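A minimal sketch of this 2-dimensional slicing, using the CIE 1931 colour matching functions (the exact values are not reproduced here):
import colour
msds = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
print(msds[0:3, 0:2])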
A copy of the discretized data of the underlying colour.SpectralDistribution and colour.MultiSpectralDistributions classes can be accessed via the wavelengths and values properties. However, it cannot be changed directly via the properties or slicing:
Attention
The data returned by the wavelengths and values properties is a copy of the underlying colour.SpectralDistribution and colour.MultiSpectralDistributions classes' discretized data: it can only be changed indirectly.
data = {
500: 0.0651,
520: 0.0705,
540: 0.0772,
560: 0.0870,
580: 0.1128,
600: 0.1360,
}
sd = colour.SpectralDistribution(data)
# Note: The wavelength 500nm is at index 0.
sd.values[0] = 0
sd[500]
0.065100000000000019
Instead, the values can be set indirectly:
values = sd.values
values[0] = 0
sd.values = values
sd.values
array([ 0. , 0.0705, 0.0772, 0.087 , 0.1128, 0.136 ])
Domain-Range Scales#
Note
This section contains important information.
Colour adopts 4 main input domains and output ranges:
Scalars usually in domain-range [0, 1] (or [0, 10] for Munsell Value).
Percentages usually in domain-range [0, 100].
Degrees usually in domain-range [0, 360].
Integers usually in domain-range [0, 2**n -1] where n is the bit depth.
This is error prone, but it is also a direct consequence of the inconsistency of the colour science field itself. We have discussed this at length and were leaning toward normalising the whole API to domain-range [0, 1], but we never committed to it for the reasons highlighted by the following points:
A colour scientist performing computations related to the Munsell Renotation System would be very surprised if the output Munsell Value was in range [0, 1] or [0, 100].
A Visual Effects Industry artist would be astonished to find that a conversion from CIE XYZ to sRGB yields values in range [0, 100].
However, the benefits of a consistent and predictable domain-range scale are numerous; thus, Colour 0.3.12 introduced a mechanism allowing users to work within one of two available domain-range scales.
Scale - Reference#
‘Reference’ is the default domain-range scale of Colour: objects adopt the domain-range scale of the implemented reference, i.e. the paper or publication.
The ‘Reference’ domain-range scale is inconsistent, e.g. colour appearance models and spectral conversions are typically in domain-range [0, 100] while RGB models operate in domain-range [0, 1]. Some objects, e.g. the colour.colorimetry.lightness_Fairchild2011() definition, have a mismatched domain-range: input domain [0, 1] and output range [0, 100].
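A minimal sketch illustrating the mismatched domain-range mentioned above: in the ‘Reference’ scale, the definition takes a luminance value in domain [0, 1] and returns a Lightness value in range [0, 100] (the printed value is not reproduced here):
import colour
print(colour.colorimetry.lightness_Fairchild2011(0.12197225))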
Scale - 1#
‘1’ is a domain-range scale converting all the relevant objects from Colour public API to domain-range [0, 1]:
Scalars in domain-range [0, 10], e.g. Munsell Value, are scaled by 10.
Percentages in domain-range [0, 100] are scaled by 100.
Degrees in domain-range [0, 360] are scaled by 360.
Integers in domain-range [0, 2**n -1] where n is the bit depth are scaled by 2**n -1.
Dimensionless values are unaffected and are indicated with DN.
Unaffected values are unaffected and are indicated with UN.
Warning
The conversion to the ‘1’ domain-range scale is a soft normalisation: similarly to the ‘Reference’ domain-range scale, it is normal to encounter values exceeding 1, e.g. High Dynamic Range Imagery (HDRI), or negative values, e.g. out-of-gamut RGB colourspace values. Some definitions, such as the colour.models.eotf_ST2084() definition which decodes absolute luminance values, are not affected by any domain-range scale and are indicated with UN.
Understanding the Domain-Range Scale of an Object#
Using the colour.adaptation.chromatic_adaptation_CIE1994() definition docstring as an example, the Notes section features two tables.
The first table is for the domain; it lists the input arguments affected by the two domain-range scales and which normalisation they should adopt depending on the domain-range scale in use:
| Domain | Scale - Reference | Scale - 1 |
|---|---|---|
| XYZ_1 | [0, 100] | [0, 1] |
| Y_o | [0, 100] | [0, 1] |
The second table is for the range and lists the return value of the definition:
| Range | Scale - Reference | Scale - 1 |
|---|---|---|
| XYZ_2 | [0, 100] | [0, 1] |
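These tables can be read directly from the definition docstring; a minimal sketch:
import colour
print(colour.adaptation.chromatic_adaptation_CIE1994.__doc__)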
Working with the Domain-Range Scales#
The current domain-range scale is returned with the colour.get_domain_range_scale() definition:
import colour
colour.get_domain_range_scale()
u'reference'
Changing from the ‘Reference’ default domain-range scale to ‘1’ is done with the colour.set_domain_range_scale() definition:
XYZ_1 = [28.00, 21.26, 5.27]
xy_o1 = [0.4476, 0.4074]
xy_o2 = [0.3127, 0.3290]
Y_o = 20
E_o1 = 1000
E_o2 = 1000
colour.adaptation.chromatic_adaptation_CIE1994(
XYZ_1, xy_o1, xy_o2, Y_o, E_o1, E_o2
)
array([ 24.03379521, 21.15621214, 17.64301199])
colour.set_domain_range_scale("1")
XYZ_1 = [0.2800, 0.2126, 0.0527]
Y_o = 0.2
colour.adaptation.chromatic_adaptation_CIE1994(
XYZ_1, xy_o1, xy_o2, Y_o, E_o1, E_o2
)
array([ 0.24033795, 0.21156212, 0.17643012])
The output tristimulus values with the ‘1’ domain-range scale are equal to those from the ‘Reference’ default domain-range scale divided by 100.
Passing incorrectly scaled values to the colour.adaptation.chromatic_adaptation_CIE1994() definition would result in unexpected values and, in that case, a warning:
colour.set_domain_range_scale("Reference")
colour.adaptation.chromatic_adaptation_CIE1994(
XYZ_1, xy_o1, xy_o2, Y_o, E_o1, E_o2
)
File "<ipython-input-...>", line 4, in <module>
E_o2)
File "/colour-science/colour/colour/adaptation/cie1994.py", line 134, in chromatic_adaptation_CIE1994
warning(('"Y_o" luminance factor must be in [18, 100] domain, '
/colour-science/colour/colour/utilities/verbose.py:207: ColourWarning: "Y_o" luminance factor must be in [18, 100] domain, unpredictable results may occur!
warn(*args, **kwargs)
array([ 0.17171825, 0.13731098, 0.09972054])
Setting the ‘1’ domain-range scale has the following effect on the colour.adaptation.chromatic_adaptation_CIE1994() definition: as it expects values in domain [0, 100], scaling occurs and the relevant input values, i.e. the values listed in the domain table, XYZ_1 and Y_o, are converted from domain [0, 1] to domain [0, 100] by the colour.utilities.to_domain_100() definition; conversely, the return value XYZ_2 is converted from range [0, 100] to range [0, 1] by the colour.utilities.from_range_100() definition.
A convenient alternative to the colour.set_domain_range_scale() definition is the colour.domain_range_scale context manager and decorator. It temporarily overrides Colour's domain-range scale with the given scale value:
with colour.domain_range_scale("1"):
    colour.adaptation.chromatic_adaptation_CIE1994(
        XYZ_1, xy_o1, xy_o2, Y_o, E_o1, E_o2
    )
[ 0.24033795 0.21156212 0.17643012]
Multiprocessing on Windows with Domain-Range Scales#
Windows does not have a fork system call; a consequence is that child processes do not necessarily inherit changes made to global variables.
This has crucial consequences, as Colour stores the current domain-range scale in a global variable.
The solution is to define an initialisation definition that sets the scale when child processes are spawned.
The colour.utilities.multiprocessing_pool context manager conveniently performs the required initialisation so that the domain-range scale is propagated appropriately to child processes.
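A minimal sketch, using the standard multiprocessing module directly, of the kind of initialisation performed for child processes; the _initializer and _lightness helpers are hypothetical names used for illustration only:
import multiprocessing
import colour

def _initializer(scale):
    # Propagating the parent process domain-range scale to the child process.
    colour.set_domain_range_scale(scale)

def _lightness(Y):
    return colour.colorimetry.lightness_CIE1976(Y)

if __name__ == "__main__":
    colour.set_domain_range_scale("1")
    with multiprocessing.Pool(
        processes=2,
        initializer=_initializer,
        initargs=(colour.get_domain_range_scale(),),
    ) as pool:
        print(pool.map(_lightness, [0.1, 0.2, 0.3]))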
Safe Power and Division#
Colour's default handling of fractional powers and zero-divisions occurring during practical applications is managed via various definitions and context managers.
Safe Power#
NaNs generation occurs when a negative number \(a\) is raised to the fractional power \(p\). This can be avoided using the colour.algebra.spow() definition that raises to the power as follows: \(sign(a) * |a|^p\).
To the extent possible, the colour.algebra.spow() definition has been used throughout the codebase. The default behaviour is controlled with the following definitions:
colour.algebra.set_spow_enabled()
colour.algebra.spow_enable() (Context Manager & Decorator)
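A minimal sketch contrasting NumPy's fractional power of a negative number, which yields NaN, with the sign-preserving colour.algebra.spow() definition:
import numpy as np
import colour
print(np.power(-2.0, 0.5))  # nan
print(colour.algebra.spow(-2, 0.5))  # sign(a) * |a| ** p, i.e. approximately -1.414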
Safe Division#
NaNs and +/- infs generation occurs when a number \(a\) is divided by 0. This can be avoided using the colour.algebra.sdiv() definition. It has been used wherever deemed relevant in the codebase. The default behaviour is controlled with the following definitions:
colour.algebra.sdiv_mode() (Context Manager & Decorator)
The following modes are available:
Numpy : The current Numpy zero-division handling occurs.
Ignore : Zero-division occurs silently.
Warning : Zero-division occurs with a warning.
Ignore Zero Conversion : Zero-division occurs silently and NaNs or +/- infs values are converted to zeros. See the numpy.nan_to_num() definition for more details.
Warning Zero Conversion : Zero-division occurs with a warning and NaNs or +/- infs values are converted to zeros. See the numpy.nan_to_num() definition for more details.
Ignore Limit Conversion : Zero-division occurs silently and NaNs or +/- infs values are converted to zeros or the largest +/- finite floating point values representable by the division result numpy.dtype. See the numpy.nan_to_num() definition for more details.
Warning Limit Conversion : Zero-division occurs with a warning and NaNs or +/- infs values are converted to zeros or the largest +/- finite floating point values representable by the division result numpy.dtype.
colour.algebra.get_sdiv_mode()
'Ignore Zero Conversion'
colour.algebra.set_sdiv_mode("Numpy")
colour.UCS_to_uv([0, 0, 0])
/Users/kelsolaar/Documents/Development/colour-science/colour/colour/algebra/common.py:317: RuntimeWarning: invalid value encountered in true_divide
c = a / b
array([ nan, nan])
colour.algebra.set_sdiv_mode("Ignore Zero Conversion")
colour.UCS_to_uv([0, 0, 0])
array([ 0., 0.])
Advanced Concepts#
This page describes some advanced usage scenarios of Colour.
Environment#
Various environment variables can be used to modify Colour behaviour at runtime:
COLOUR_SCIENCE__DEFAULT_INT_DTYPE: Set the default integer dtype for most of Colour computations. Possible values are int32 and int64 (default). Changing the integer dtype will almost certainly break Colour! With great power comes great responsibility.
COLOUR_SCIENCE__DEFAULT_FLOAT_DTYPE: Set the float dtype for most of Colour computations. Possible values are float16, float32 and float64 (default). Changing the float dtype might result in various Colour functionality breaking entirely. With great power comes great responsibility.
COLOUR_SCIENCE__COLOUR__SHOW_WARNINGS_WITH_TRACEBACK: Results in the warnings.showwarning() definition being replaced with the colour.utilities.show_warning() definition, thus providing a complete traceback from the point where the warning occurred.
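A minimal sketch showing how such a variable would be set, before Colour is first imported, e.g. at the very top of a script:
import os
os.environ["COLOUR_SCIENCE__DEFAULT_FLOAT_DTYPE"] = "float32"
import colour
print(colour.xyY_to_XYZ([0.4316, 0.3777, 0.1008]).dtype)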
Caching#
Colour uses various internal caches to improve speed and prevent redundant processes, notably for spectral related computations.
The internal caches are managed with the colour.utilities.CACHE_REGISTRY cache registry object:
import colour
print(colour.utilities.CACHE_REGISTRY)
{'colour.colorimetry.spectrum._CACHE_RESHAPED_SDS_AND_MSDS': '0 item(s)',
'colour.colorimetry.tristimulus_values._CACHE_LAGRANGE_INTERPOLATING_COEFFICIENTS': '0 '
'item(s)',
'colour.colorimetry.tristimulus_values._CACHE_SD_TO_XYZ': '0 item(s)',
'colour.colorimetry.tristimulus_values._CACHE_TRISTIMULUS_WEIGHTING_FACTORS': '0 '
'item(s)',
'colour.quality.cfi2017._CACHE_TCS_CIE2017': '0 item(s)',
'colour.volume.macadam_limits._CACHE_OPTIMAL_COLOUR_STIMULI_XYZ': '0 item(s)',
'colour.volume.macadam_limits._CACHE_OPTIMAL_COLOUR_STIMULI_XYZ_TRIANGULATIONS': '0 '
'item(s)',
'colour.volume.spectrum._CACHE_OUTER_SURFACE_XYZ': '0 item(s)',
'colour.volume.spectrum._CACHE_OUTER_SURFACE_XYZ_POINTS': '0 item(s)'}
See colour.utilities.CacheRegistry class documentation for more information on how to manage the cache registry.
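A minimal sketch showing a cache being populated by a spectral computation; the sd_to_XYZ related cache should then report item(s) when the registry is printed again:
import colour
sd = colour.SDS_ILLUMINANTS["D65"]
colour.sd_to_XYZ(sd)
print(colour.utilities.CACHE_REGISTRY)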
Using Colour without Scipy#
With the release of Colour 0.3.8, SciPy became a requirement.
Scipy is notoriously hard to compile, especially on Windows. Some Digital Content Creation (DCC) applications are shipping Python interpreters compiled with versions of Visual Studio such as 2011 or 2015. Those are incompatible with the Python Wheels commonly built with Visual Studio 2008 (Python 2.7) or Visual Studio 2017 (Python 3.6).
It is however possible to use Colour in a partially broken state and mock Scipy by using the mock_for_colour.py module.
Assuming it is available for import, a typical usage would be as follows:
import sys
from mock_for_colour import MockModule
for module in (
    "scipy",
    "scipy.interpolate",
    "scipy.linalg",
    "scipy.ndimage",
    "scipy.ndimage.filters",
    "scipy.spatial",
    "scipy.spatial.distance",
    "scipy.optimize",
):
    sys.modules[str(module)] = MockModule(str(module))
import colour
xyY = (0.4316, 0.3777, 0.1008)
colour.xyY_to_XYZ(xyY)
array([ 0.11518475, 0.1008 , 0.05089373])
Or directly using the mock_scipy_for_colour definition:
from mock_for_colour import mock_scipy_for_colour
mock_scipy_for_colour()
import colour
xyY = (0.4316, 0.3777, 0.1008)
colour.xyY_to_XYZ(xyY)
array([ 0.11518475, 0.1008 , 0.05089373])
Anything relying on the spectral code will be unusable, but a great amount of useful functionality will still be available.
Bibliography#
Mekides Assefa Abebe, Tania Pouli, Mohamed-Chaker Larabi, and Erik Reinhard. Perceptual Lightness Modeling for High-Dynamic-Range Imaging. ACM Transactions on Applied Perception, 15(1):1–19, July 2017. doi:10.1145/3086577.
Peter G. Barten. Contrast Sensitivity of the Human Eye and Its Effects on Image Quality. Number 1999. SPIE, December 1999. ISBN 978-0-8194-7849-8. doi:10.1117/3.353254.
Peter G. J. Barten. Formula for the contrast sensitivity of the human eye. In Yoichi Miyake and D. Rene Rasmussen, editors, Proceedings of SPIE, volume 5294, 231–238. December 2003. doi:10.1117/12.537476.
S. Bianco and R. Schettini. Two new von Kries based chromatic adaptation transforms found by numerical optimization. Color Research & Application, 35(3):184–192, June 2010. doi:10.1002/col.20573.
Barry A. Bodhaine, Norman B. Wood, Ellsworth G. Dutton, and James R. Slusser. On Rayleigh Optical Depth Calculations. Journal of Atmospheric and Oceanic Technology, 16(11):1854–1861, November 1999. doi:10.1175/1520-0426(1999)016<1854:ORODC>2.0.CO;2.
Tim Borer. Private Discussion with Mansencal, T. and Shaw, N. 2017.
Paul Bourke. Intersection point of two line segments in 2 dimensions. http://paulbourke.net/geometry/pointlineplane/.
Paul Bourke. Trilinear Interpolation. http://paulbourke.net/miscellaneous/interpolation/.
Edwin J. Breneman. Corresponding chromaticities for different states of adaptation to complex visual fields. Journal of the Optical Society of America A, 4(6):1115, June 1987. doi:10.1364/JOSAA.4.001115.
Michael H. Brill and Sabine Susstrunk. Repairing gamut problems in CIECAM02: A progress report. Color Research & Application, 33(5):424–426, October 2008. doi:10.1002/col.20432.
A. D. Broadbent. Calculation from the Original Experimental Data of the Cie 1931 RGB Standard Observer Spectral Chromaticity Co-Ordinates and Color Matching Functions. http://www.cis.rit.edu/mcsl/research/1931.php, 2009.
Wilhelm Burger and Mark James Burge. Principles of Digital Image Processing. Springer London, London, 2009. ISBN 978-1-84800-194-7. doi:10.1007/978-1-84800-195-4.
Ricardo Cabello. PlaneGeometry.js. https://github.com/mrdoob/three.js/blob/dev/src/geometries/PlaneGeometry.js.
Renbo Cao, H Joel Trussell, and Renzo Shamey. Comparison of the performance of inverse transformation methods from OSA-UCS to CIEXYZ. Journal of the Optical Society of America A, 30(8):1508, August 2013. doi:10.1364/JOSAA.30.001508.
E.C. Carter, J.D. Schanda, R. Hirschler, S. Jost, M.R. Luo, M. Melgosa, Y. Ohno, M.R. Pointer, D.C. Rich, F. Vienot, L. Whitehead, and J.H. Wold. CIE 015:2018 Colorimetry, 4th Edition. Technical Report, International Commission on Illumination, Vienna, October 2018. doi:10.25039/TR.015.2018.
Saullo Castro. Numpy: Fastest way of computing diagonal for each row of a 2d array. http://stackoverflow.com/questions/26511401/numpy-fastest-way-of-computing-diagonal-for-each-row-of-a-2d-array/26517247#26517247, 2014.
Paul Centore. The Munsell and Kubelka-Munk Toolbox. http://www.munsellcolourscienceforpainters.com/MunsellAndKubelkaMunkToolbox/MunsellAndKubelkaMunkToolbox.html.
Paul Centore. An open-source inversion algorithm for the Munsell renotation. Color Research & Application, 37(6):455–464, December 2012. doi:10.1002/col.20715.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - GeneralRoutines/CIELABtoApproxMunsellSpec.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/ChromDiagHueAngleToMunsellHue.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/FindHueOnRenotationOvoid.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/MaxChromaForExtrapolatedRenotation.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/MunsellHueToASTMHue.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/MunsellHueToChromDiagHueAngle.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/MunsellToxyForIntegerMunsellValue.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/MunsellToxyY.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellRenotationRoutines/xyYtoMunsell.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellSystemRoutines/BoundingRenotationHues.m. 2014.
Paul Centore. MunsellAndKubelkaMunkToolboxApr2014 - MunsellSystemRoutines/LinearVsRadialInterpOnRenotationOvoid.m. 2014.
Peter Chamberlain. LUT documentation (to create from another program). https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=40284#p232952, 2015.
Vien Cheung, Stephen Westland, David Connah, and Caterina Ripamonti. A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms. Coloration Technology, 120(1):19–25, 2004. doi:10.1111/j.1478-4408.2004.tb00201.x.
Sean Cooper and Harald Brendel. ARRI LogC4 Logarithmic Color Space SPECIFICATION. 2022.
Russell Cottrell. The Russell RGB working color space.
Matthew Cowan, Glenn Kennel, Thomas Maier, and Brad Walker. Contrast Sensitivity Experiment to Determine the Bit Depth for Digital Cinema. SMPTE Motion Imaging Journal, 113(9):281–292, September 2004. doi:10.5594/j11549.
G. Cui, M. R. Luo, B. Rigg, G. Roesler, and K. Witt. Uniform colour spaces based on the DIN99 colour-difference formula. Color Research & Application, 27(4):282–290, 2002. doi:10.1002/col.10066.
Maryam Mohammadzadeh Darrodi, Graham Finlayson, Teresa Goodman, and Michal Mackiewicz. Reference data set for camera spectral sensitivity estimation. Journal of the Optical Society of America A, 32(3):381, March 2015. doi:10.1364/JOSAA.32.000381.
Aurelien David, Paul T. Fini, Kevin W. Houser, Yoshi Ohno, Michael P. Royer, Kevin A. G. Smet, Minchen Wei, and Lorne Whitehead. Development of the IES method for evaluating the color rendition of light sources. Optics Express, 23(12):15888, June 2015. doi:10.1364/OE.23.015888.
Wendy Davis and Yoshiro Ohno. Color quality scale. Optical Engineering, 49(3):033602, March 2010. doi:10.1117/1.3360335.
Scott Dyer, Alexander Forsythe, Jonathon Irons, Thomas Mansencal, and Miaoqi Zhu. RAW to ACES. 2017.
Fritz Ebner and Mark D. Fairchild. Finding constant hue surfaces in color space. In Giordano B. Beretta and Reiner Eschbach, editors, Proc. SPIE 3300, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts III, (2 January 1998), 107–117. January 1998. doi:10.1117/12.298269.
U. Murat Erdem. Fast Line Segment Intersection. http://www.mathworks.com/matlabcentral/fileexchange/27205-fast-line-segment-intersection.
Turan Erdogan. How to Calculate Luminosity, Dominant Wavelength, and Excitation Purity.
M. Fairchild and D. Wyble. Colorimetric Characterization of The Apple Studio Display (flat panel LCD). 1998.
Mark D Fairchild and Ping-hsu Chen. Brightness, lightness, and specifying color in high-dynamic-range scenes and images. In Susan P. Farnand and Frans Gaykema, editors, Proc. SPIE 7867, Image Quality and System Performance VIII, 78670O. January 2011. doi:10.1117/12.872075.
Mark D Fairchild and Luke Hellwig. Private Discussion with Mansencal, T. 2022.
Mark D. Fairchild. Fairchild YSh.
Mark D. Fairchild. Formulation and testing of an incomplete-chromatic-adaptation model. Color Research & Application, 16(4):243–250, August 1991. doi:10.1002/col.5080160406.
Mark D. Fairchild. Refinement of the RLAB color space. Color Research & Application, 21(5):338–346, October 1996. doi:10.1002/(SICI)1520-6378(199610)21:5<338::AID-COL3>3.0.CO;2-Z.
Mark D. Fairchild. CIECAM02. In Color Appearance Models, pages 289–301. Wiley, second edition, 2004.
Mark D. Fairchild. Chromatic Adaptation Models. In Color Appearance Models, pages 4179–4252. Wiley, third edition, 2013.
Mark D. Fairchild. The Hunt Model. In Color Appearance Models, pages 5094–5556. Wiley, third edition, 2013.
Mark D. Fairchild. The Nayatani et al. Model. In Color Appearance Models, pages 4810–5085. Wiley, third edition, 2013.
Mark D. Fairchild. The RLAB Model. In Color Appearance Models, pages 5563–5824. Wiley, third edition, 2013.
Mark D. Fairchild. ATD Model. In Color Appearance Models, pages 5852–5991. Wiley, third edition, 2013.
Mark D. Fairchild. FAIRCHILD'S 1990 MODEL. In Color Appearance Models, pages 4418–4495. Wiley, third edition, 2013.
Mark D. Fairchild. IPT Colourspace. In Color Appearance Models, pages 6197–6223. Wiley, third edition, 2013.
Mark D. Fairchild. LLAB Model. In Color Appearance Models, pages 6025–6178. Wiley, third edition, 2013.
Mark D. Fairchild and David R. Wyble. Hdr-CIELAB and hdr-IPT: Simple Models for Describing the Color of High-Dynamic-Range and Wide-Color-Gamut Images. In Proc. of Color and Imaging Conference, 322–326. 2010.
Hugh S. Fairman. The calculation of weight factors for tristimulus integration. Color Research & Application, 10(4):199–203, 1985. doi:10.1002/col.5080100407.
Hugh S. Fairman, Michael H. Brill, and Henry Hemmendinger. How the CIE 1931 color-matching functions were derived from Wright-Guild data. Color Research & Application, 22(1):11–23, February 1997. doi:10.1002/(SICI)1520-6378(199702)22:1<11::AID-COL4>3.0.CO;2-7.
Graham D. Finlayson, Michal MacKiewicz, and Anya Hurlbert. Color Correction Using Root-Polynomial Regression. IEEE Transactions on Image Processing, 24(5):1460–1470, May 2015. doi:10.1109/TIP.2015.2405336.
Alex Forsythe. Private Discussion with Mansencal, T. 2018.
Jan Fröhlich. Encoding high dynamic range and wide color gamut imagery. 2017.
Hugo Gaggioni, Patel Dhanendra, Jin Yamashita, N. Kawada, K. Endo, and Curtis Clark. S-Log: A new LUT for digital production mastering and interchange applications.
Pedro A. García, Rafael Huertas, Manuel Melgosa, and Guihua Cui. Measurement of the relationship between perceived and computed color differences. Journal of the Optical Society of America A, 24(7):1823, July 2007. doi:10.1364/JOSAA.24.001823.
L. G. Glasser, A. H. McKinney, C. D. Reilly, and P. D. Schnelle. Cube-Root Color Coordinate System. Journal of the Optical Society of America, 48(10):736, October 1958. doi:10.1364/JOSA.48.000736.
S. Lee Guth. Further applications of the ATD model for color vision. In Eric Walowit, editor, Proc. SPIE 2414, Device-Independent Color Imaging II, volume 2414, 12–26. April 1995. doi:10.1117/12.206546.
Radim Halir and Jan Flusser. Numerically Stable Direct Least Squares Fitting Of Ellipses. 1998. doi:10.1.1.1.7559.
Allan Hanbury. A 3D-Polar Coordinate Colour Representation Well Adapted to Image Analysis. In Josef Bigun and Tomas Gustavsson, editors, Image Analysis, 804–811. Berlin, Heidelberg, 2003. Springer Berlin Heidelberg.
Luke Hellwig and Mark D. Fairchild. Using Gaussian Spectra to Derive a Hue-linear Color Space. Journal of Perceptual Imaging, 2020. doi:10.2352/J.Percept.Imaging.2020.3.2.020401.
Luke Hellwig and Mark D. Fairchild. Brightness, lightness, colorfulness, and chroma in \textsc CIECAM02 and \textsc CAM16. Color Research & Application, pages col.22792, March 2022. doi:10.1002/col.22792.
Luke Hellwig, Dale Stolitzka, and Mark D. Fairchild. Extending CIECAM02 and CAM16 for the Helmholtz–Kohlrausch effect. Color Research & Application, pages col.22793, June 2022. doi:10.1002/col.22793.
Joseph Holmes. Ekta Space PS 5.
Jim Houston. Private Discussion with Mansencal, T. 2015.
Min Huang, Guihua Cui, Manuel Melgosa, Manuel Sánchez-Marañón, Changjun Li, M. Ronnier Luo, and Haoxue Liu. Power functions improving the performance of color-difference formulas. Optical Society of America, 23(1):597–610, 2015. doi:10.1364/OE.23.000597.
Po-Chieh Hung and Roy S. Berns. Determination of constant Hue Loci for a CRT gamut and their predictions using color appearance spaces. Color Research & Application, 20(5):285–295, October 1995. doi:10.1002/col.5080200506.
R.W.G. Hunt. The Reproduction of Colour. John Wiley & Sons, Ltd, Chichester, UK, sixth edition, September 2004. ISBN 978-0-470-02427-0. doi:10.1002/0470024275.
Wenzel Jakob and Johannes Hanika. A Low-Dimensional Function Space for Efficient Spectral Upsampling. Computer Graphics Forum, 38(2):147–155, May 2019. doi:10.1111/cgf.13626.
Jun Jiang, Dengyu Liu, Jinwei Gu, and Sabine Susstrunk. What is the space of spectral sensitivity functions for digital color cameras? In 2013 IEEE Workshop on Applications of Computer Vision (WACV), 168–179. IEEE, January 2013. doi:10.1109/WACV.2013.6475015.
Bongsoon Kang, Ohak Moon, Changhee Hong, Honam Lee, Bonghwan Cho, and Youngsun Kim. Design of advanced color: Temperature control system for HDTV applications. Journal of the Korean Physical Society, 41(6):865–871, 2002.
Paul Kienzle, Nikunj Patel, and James Krycka. Refl1d.numpyerrors - Refl1D v0.6.19 documentation. http://www.reflectometry.org/danse/docs/refl1d/_modules/refl1d/numpyerrors.html, 2011.
Mh Kim, T Weyrich, and J Kautz. Modeling Human Color Perception under Extended Luminance Levels. ACM Transactions on Graphics, 28(3):27:1–27:9, 2009. doi:10.1145/1531326.1531333.
Richard Kirk. Truelight Software Library 2.0. 2006.
Ivan A. Konovalenko. proLab_param.m. 2021.
Ivan A. Konovalenko, Anna A. Smagina, Dmitry P. Nikolaev, and Petr P. Nikolaev. ProLab: perceptually uniform projective colour coordinate system. arXiv:2012.07653 [cs], January 2021. arXiv:2012.07653.
M Krystek. An algorithm to calculate correlated colour temperature. Color Research & Application, 10(1):38–40, 1985. doi:10.1002/col.5080100109.
Changjun Li, Zhiqiang Li, Zhifeng Wang, Yang Xu, Ming Ronnier Luo, Guihua Cui, Manuel Melgosa, Michael H Brill, and Michael Pointer. Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS. Color Research & Application, 42(6):703–718, December 2017. doi:10.1002/col.22131.
Changjun Li, Ming Ronnier Luo, Bryan Rigg, and Robert W. G. Hunt. CMC 2000 chromatic adaptation transform: CMCCAT2000. Color Research & Application, 27(1):49–58, February 2002. doi:10.1002/col.10005.
Changjun Li, Esther Perales, Ming Ronnier Luo, and Francisco Martinez-verdu. The Problem with CAT02 and Its Correction. 2007.
Bruce Lindbloom. Delta E (CIE 1976). http://brucelindbloom.com/Eqn_DeltaE_CIE76.html, 2003.
Bruce Lindbloom. XYZ to xyY. http://www.brucelindbloom.com/Eqn_XYZ_to_xyY.html, 2003.
Bruce Lindbloom. Spectral Power Distribution of a CIE D-Illuminant. http://www.brucelindbloom.com/Eqn_DIlluminant.html, 2007.
Bruce Lindbloom. Chromatic Adaptation. http://brucelindbloom.com/Eqn_ChromAdapt.html, 2009.
Bruce Lindbloom. Delta E (CIE 2000). http://brucelindbloom.com/Eqn_DeltaE_CIE2000.html, 2009.
Bruce Lindbloom. Delta E (CMC). http://brucelindbloom.com/Eqn_DeltaE_CMC.html, 2009.
Bruce Lindbloom. xyY to XYZ. http://www.brucelindbloom.com/Eqn_xyY_to_XYZ.html, 2009.
Bruce Lindbloom. Delta E (CIE 1994). http://brucelindbloom.com/Eqn_DeltaE_CIE94.html, 2011.
Bruce Lindbloom. RGB Working Space Information. http://www.brucelindbloom.com/WorkingSpaceInfo.html, 2014.
Bruce Lindbloom. About the Lab Gamut. http://www.brucelindbloom.com/LabGamutDisplayHelp.html, 2015.
Taoran Lu, Fangjun Pu, Peng Yin, Tao Chen, Walt Husak, Jaclyn Pytlarz, Robin Atkins, Jan Froehlich, and Guan-Ming Su. ITP Colour Space and Its Compression Performance for High Dynamic Range and Wide Colour Gamut Video Distribution. ZTE Communications, 14(1):32–38, 2016.
M. Ronnier Luo, Guihua Cui, and Changjun Li. Uniform colour spaces based on CIECAM02 colour appearance model. Color Research & Application, 31(4):320–330, August 2006. doi:10.1002/col.20227.
M. Ronnier Luo and Peter A. Rhodes. Corresponding-colour datasets. Color Research & Application, 24(4):295–296, August 1999. doi:10.1002/(SICI)1520-6378(199908)24:4<295::AID-COL10>3.0.CO;2-K.
Ming Ronnier Luo and Changjun Li. CIECAM02 and Its Recent Developments. In Christine Fernandez-Maloigne, editor, Advanced Color Image Processing and Analysis, pages 19–58. Springer New York, New York, NY, 2013. doi:10.1007/978-1-4419-6190-7.
Ming Ronnier Luo, Mei-Chun Lo, and Wen-Guey Kuo. The LLAB (l:c) colour model. Color Research & Application, 21(6):412–429, December 1996. doi:10.1002/(SICI)1520-6378(199612)21:6<412::AID-COL4>3.0.CO;2-Z.
Ming Ronnier Luo and Ján Morovic. Two Unsolved Issues in Colour Management - Colour Appearance and Gamut Mapping. In Conference: 5th International Conference on High Technology: Imaging Science and Technology – Evolution & Promise, 136–147. 1996.
David L. MacAdam. Maximum Visual Efficiency of Colored Materials. Journal of the Optical Society of America, 25(11):361–367, November 1935. doi:10.1364/JOSA.25.000361.
David L. Macadam. Visual Sensitivities to Color Differences in Daylight. Journal of the Optical Society of America, 32(5):28, 1942. doi:10.1364/JOSA.32.000247.
G.M. Machado, M.M. Oliveira, and L. Fernandes. A Physiologically-based Model for Simulation of Color Vision Deficiency. IEEE Transactions on Visualization and Computer Graphics, 15(6):1291–1298, November 2009. doi:10.1109/TVCG.2009.113.
Gustavo Mello Machado. A model for simulation of color vision deficiency and a color contrast enhancement technique for dichromats. 2010.
Ian Mallett and Cem Yuksel. Spectral Primary Decomposition for Rendering with sRGB Reflectance. Eurographics Symposium on Rendering - DL-only and Industry Track, pages 7 pages, 2019. doi:10.2312/SR.20191216.
Henrique Malvar and Gary Sullivan. YCoCg-R: A Color Space with RGB Reversibility and Low Dynamic Range. 2003.
Thomas Mansencal. Lookup.
Thomas Mansencal. Structure.
Thomas Mansencal. RED Colourspaces Derivation. https://www.colour-science.org/posts/red-colourspaces-derivation, 2015.
Thomas Mansencal. How is the visible gamut bounded? https://stackoverflow.com/a/48396021/931625, 2018.
Thomas Mansencal. Colour - Datasets. 2019. doi:10.5281/zenodo.3362520.
Manuel Melgosa. CIE / ISO new standard: CIEDE2000. 2013.
Johannes Meng, Florian Simon, Johannes Hanika, and Carsten Dachsbacher. Physically Meaningful Rendering using Tristimulus Colours. Computer Graphics Forum, 34(4):31–40, July 2015. doi:10.1111/cgf.12676.
Scott Miller. A Perceptual EOTF for Extended Dynamic Range Imagery. 2014.
Wojciech Mokrzycki and Maciej Tatol. Color difference Delta E - A survey. Machine Graphics and Vision, 20:383–411, April 2011.
Nathan Moroney. A Radial Sampling of the OSA Uniform Color Scales. Color and Imaging Conference, 2003(1):175–180, 2003.
Nathan Moroney, Mark D. Fairchild, Robert W. G. Hunt, Changjun Li, Ming Ronnier Luo, and Todd Newman. The CIECAM02 color appearance model. Color and Imaging Conference, pages 23–27, 2002.
Ján Morovič and M. Ronnier Luo. Calculating medium and image gamut boundaries for gamut mapping. Color Research and Application, 25(6):394–401, 2000. doi:10.1002/1520-6378(200012)25:63.0.CO;2-Y.
Luvin Munish Ragoo and Ivar Farup. Optimising a Euclidean Colour Space Transform for Colour Order and Perceptual Uniformity. Color and Imaging Conference, 29(1):282–287, November 2021. doi:10.2352/issn.2169-2629.2021.29.282.
Graeme Nattress. Private Discussion with Shaw, N. 2016.
Yoshinobu Nayatani. Simple estimation methods for the Helmholtz—Kohlrausch effect. Color Research & Application, 22(6):385–401, 1997. doi:10.1002/(SICI)1520-6378(199712)22:6<385::AID-COL6>3.0.CO;2-R.
Yoshinobu Nayatani, Hiroaki Sobagaki, and Kenjiro Hashimoto Tadashi Yano. Lightness dependency of chroma scales of a nonlinear color-appearance model and its latest formulation. Color Research & Application, 20(3):156–167, June 1995. doi:10.1002/col.5080200305.
Sidney M. Newhall, Dorothy Nickerson, and Deane B. Judd. Final Report of the OSA Subcommittee on the Spacing of the Munsell Colors. Journal of the Optical Society of America, 33(7):385, July 1943. doi:10.1364/JOSA.33.000385.
Yoshi Ohno. Spectral design considerations for white LED color rendering. Optical Engineering, 44(11):111302, 2005. doi:10.1117/1.2130694.
Yoshiro Ohno. Practical Use and Calculation of CCT and Duv. LEUKOS, 10(1):47–55, January 2014. doi:10.1080/15502724.2014.839020.
Yoshiro Ohno and Wendy Davis. NIST CQS simulation. 2008.
Yoshiro Ohno and Wendy Davis. NIST CQS simulation. 2013.
N. Ohta. The basis of color reproduction engineering. 1997.
H. Otsu, M. Yamamoto, and T. Hachisuka. Reproducing Spectral Reflectances From Tristimulus Colours. Computer Graphics Forum, 37(6):370–381, September 2018. doi:10.1111/cgf.13332.
Björn Ottosson. A perceptual color space for image processing. https://bottosson.github.io/posts/oklab/, 2020.
Michael R. Pointer. Pointer's Gamut Data. 1980.
Herilalaina Rakotoarison. Bunch. March 2017.
Muhammad Safdar, Guihua Cui, Youn Jin Kim, and Ming Ronnier Luo. Perceptually uniform color space for image signals including high dynamic range and wide gamut. Optics Express, 25(13):15131, June 2017. doi:10.1364/OE.25.015131.
Muhammad Safdar, Jon Y. Hardeberg, Youn Jin Kim, and Ming Ronnier Luo. A Colour Appearance Model based on J z a z b z Colour Space. Color and Imaging Conference, 2018(1):96–101, November 2018. doi:10.2352/ISSN.2169-2629.2018.26.96.
Muhammad Safdar, Jon Yngve Hardeberg, and Ming Ronnier Luo. ZCAM, a colour appearance model based on a high dynamic range uniform colour space. Optics Express, 29(4):6036, February 2021. doi:10.1364/OE.413659.
Madenda Sarifuddin and Rokia Missaoui. A New Perceptually Uniform Color Space with Associated Color Similarity Measure for ContentBased Image and Video Retrieval. 2005.
Gaurav Sharma, Wencheng Wu, and Edul N. Dalal. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application, 30(1):21–30, February 2005. doi:10.1002/col.20070.
Peter Shirley and David Hart. The prismatic color space for rgb computations. 2015.
Daniele Siragusano. Private Discussion with Shaw, Nick. 2018.
Alvy Ray Smith. Color gamut transform pairs. In Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH '78, 12–19. New York, New York, USA, 1978. ACM Press. doi:10.1145/800248.807361.
Brian Smits. An RGB-to-Spectrum Conversion for Reflectances. Journal of Graphics Tools, 4(4):11–22, January 1999. doi:10.1080/10867651.1999.10487511.
K E Spaulding, G J Woolfe, and E J Giorgianni. Reference Input/Output Medium Metric RGB Color Encodings (RIMM/ROMM RGB). 2000.
Nick Spiker. Private Discussion with Mansencal, T. 2015.
E. I. Stearns and R. E. Stearns. An example of a method for correcting radiance data for Bandpass error. Color Research & Application, 13(4):257–259, August 1988. doi:10.1002/col.5080130410.
Andrew Stockman and Lindsay T. Sharpe. Cone Fundamentals. http://www.cvrl.org/cones.htm, 2000.
Sabine Susstrunk, Robert Buckley, and Steve Swen. Standard RGB Color Spaces. 1999.
Sabine E. Susstrunk, Jack M. Holm, and Graham D. Finlayson. Chromatic adaptation performance of different RGB sensors. In Reiner Eschbach and Gabriel G. Marcu, editors, Photonics West 2001 - Electronic Imaging, volume 4300, 172–183. December 2000. doi:10.1117/12.410788.
Larry Thorpe. CANON-LOG TRANSFER CHARACTERISTIC. 2012.
Tashi Trieu. Private Discussion with Mansencal, T. 2015.
Greg Ward. Private Discussion with Mansencal, T. 2016.
Greg Ward and Elena Eydelberg-Vileshin. Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries. Eurographics workshop on Rendering, pages 117–124, 2002. doi:10.2312/EGWR/EGWR02/117-124.
Andrew B. Watson and John I. Yellott. A unified formula for light-adapted pupil size. Journal of Vision, 12(10):12, September 2012. doi:10.1167/12.10.12.
Stephen Westland and Caterina Ripamonti. Table 8.2. In Computational Colour Science Using MATLAB, pages 137. John Wiley & Sons, Ltd, Chichester, UK, first edition, March 2004. doi:10.1002/0470020326.
Stephen Westland, Caterina Ripamonti, and Vien Cheung. Correction for Spectral Bandpass. In Computational Colour Science Using MATLAB, pages 38. second edition, 2012.
Stephen Westland, Caterina Ripamonti, and Vien Cheung. Extrapolation Methods. In Computational Colour Science Using MATLAB, pages 38. second edition, 2012.
Stephen Westland, Caterina Ripamonti, and Vien Cheung. Interpolation Methods. In Computational Colour Science Using MATLAB, pages 29–37. second edition, 2012.
Stephen Westland, Caterina Ripamonti, and Vien Cheung. CMCCAT2000. In Computational Colour Science Using MATLAB, pages 83–86. second edition, 2012.
Stephen Westland, Caterina Ripamonti, and Vien Cheung. CMCCAT97. In Computational Colour Science Using MATLAB, pages 80. second edition, 2012.
Günter Wyszecki. Proposal for a New Color-Difference Formula. Journal of the Optical Society of America, 53(11):1318, November 1963. doi:10.1364/JOSA.53.001318.
Günther Wyszecki and W S Stiles. Equation I(1.2.1). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 8. Wiley, 2000.
Günther Wyszecki and W S Stiles. Table 2(5.4.1) MacAdam Ellipses (Observer PGN) Observed and Calculated on the Basis of a Normal Distribution of Color Matches about a Color Center (Silberstein and MacAdam, 1945). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 309. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Integration Replaced by Summation. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 158–163. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Standard Photometric Observers. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 256–259,395. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Table 1(3.11) Isotemperature Lines. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 228. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Table 1(3.3.3). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 138–139. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Table II(3.7). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 778–779. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Table I(3.7). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 776–777. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. Table I(6.5.3) Whiteness Formulae (Whiteness Measure Denoted by W). In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 837–839. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. The CIE 1964 Standard Observer. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 141. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. CIE Method of Calculating D-Illuminants. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 145–146. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. CIE 1976 (L*u*v*)-Space and Color-Difference Formula. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 167. Wiley, 2000.
Günther Wyszecki and W. S. Stiles. DISTRIBUTION TEMPERATURE, COLOR TEMPERATURE, AND CORRELATED COLOR TEMPERATURE. In Color Science: Concepts and Methods, Quantitative Data and Formulae, pages 224–229. Wiley, 2000.
Rory Yorke. Python: Change format of np.array or allow tolerance in in1d function. http://stackoverflow.com/a/23521245/931625, 2014.
Qiyan Zhai and Ming R. Luo. Study of chromatic adaptation via neutral white matches on different viewing media. Optics Express, 26(6):7724, March 2018. doi:10.1364/OE.26.007724.
Adobe Systems. Adobe RGB (1998) Color Image Encoding. 2005.
Adobe Systems. Adobe DNG Software Development Kit (SDK) - 1.3.0.0 - dng_sdk_1_3/dng_sdk/source/dng_temperature.cpp::dng_temperature::xy_coord. 2013.
Adobe Systems. Adobe DNG Software Development Kit (SDK) - 1.3.0.0 - dng_sdk_1_3/dng_sdk/source/dng_temperature.cpp::dng_temperature::Set_xy_coord. 2013.
Adobe Systems. Cube LUT Specification. 2013.
ANSI. Specification of ROMM RGB. 2003.
ANSI and IES Color Committee. ANSI/IES TM-30-18 - IES Method for Evaluating Light Source Color Rendition. ANSI/IES, 2018. ISBN 978-0-87995-379-9.
Apple Inc. displayP3. https://developer.apple.com/documentation/coregraphics/cgcolorspace/1408916-displayp3, 2019.
ARRI. ALEXA - Log C Curve - Usage in VFX. 2012.
Association of Radio Industries and Businesses. Essential Parameter Values for the Extended Image Dynamic Range Television (EIDRTV) System for Programme Production. 2015.
ASTM International. ASTM D1535-89 - Standard Practice for Specifying Color by the Munsell System. 1989.
ASTM International. ASTM D2244-07 - Standard Practice for Calculation of Color Tolerances and Color Differences from Instrumentally Measured Color Coordinates. 2007. doi:10.1520/D2244-16.
ASTM International. ASTM D1535-08e1 - Standard Practice for Specifying Color by the Munsell System. 2008. doi:10.1520/D1535-08E01.
ASTM International. ASTM E2022-11 - Standard Practice for Calculation of Weighting Factors for Tristimulus Integration. 2011. doi:10.1520/E2022-11.
ASTM International. ASTM E308-15 - Standard Practice for Computing the Colors of Objects by Using the CIE System. 2015. doi:10.1520/E0308-15.
ASTM International. ASTM E313-15e1 - Standard Practice for Calculating Yellowness and Whiteness Indices from Instrumentally Measured Color Coordinates. 2015. doi:10.1520/E0313-20.
BabelColor. The ColorChecker (since 1976!). http://www.babelcolor.com/main_level/ColorChecker.htm, 2012.
BabelColor. ColorChecker RGB and spectra. 2012.
Blackmagic Design. Wide Gamut Intermediate DaVinci Resolve. 2020.
Blackmagic Design. DaVinci Wide Gamut - DaVinci Resolve Studio 17 Public Beta 1. November 2020.
Blackmagic Design. Blackmagic Generation 5 Color Science. 2021.
Canon. EOS C500 Firmware Update. https://www.usa.canon.com/internet/portal/us/home/explore/product-showcases/cameras-and-lenses/cinema-eos-firmware/c500, 2014.
Canon. EOS C300 Mark II - EOS C300 Mark II Input Transform Version 2.0 (for Cinema Gamut / BT.2020). https://www.usa.canon.com/internet/portal/us/home/support/details/cameras/cinema-eos/eos-c300-mark-ii, 2016.
CIE Division 1 and CIE Division 8. CIE 248:2022 The CIE 2016 Colour Appearance Model for Colour Management Systems: CIECAM16. Commission Internationale de l'Eclairage, 2022. ISBN 978-3-902842-94-7.
CIE TC 1-32. CIE 109-1994 A Method of Predicting Corresponding Colours under Different Chromatic and Illuminance Adaptations. Commission Internationale de l'Eclairage, 1994. ISBN 978-3-900734-51-0.
CIE TC 1-36. CIE 170-1:2006 Fundamental Chromaticity Diagram with Physiological Axes - Part 1. Commission Internationale de l'Eclairage, 2006. ISBN 978-3-901906-46-6.
CIE TC 1-38. 9. INTERPOLATION. In CIE 167:2005 Recommended Practice for Tabulating Spectral Data for Use in Colour Computations, pages 14–19. 2005.
CIE TC 1-38. 9.2.4 Method of interpolation for uniformly spaced independent variable. In CIE 167:2005 Recommended Practice for Tabulating Spectral Data for Use in Colour Computations, pages 1–27. 2005.
CIE TC 1-38. Table V. Values of the c-coefficients of Equ.s 6 and 7. In CIE 167:2005 Recommended Practice for Tabulating Spectral Data for Use in Colour Computations, pages 19. 2005.
CIE TC 1-38. EXTRAPOLATION. In CIE 167:2005 Recommended Practice for Tabulating Spectral Data for Use in Colour Computations, pages 19–20. 2005.
CIE TC 1-48. 3.1 Recommendations concerning standard physical data of illuminants. In CIE 015:2004 Colorimetry, 3rd Edition, pages 12–13. 2004.
CIE TC 1-48. 9.1 Dominant wavelength and purity. In CIE 015:2004 Colorimetry, 3rd Edition, pages 32–33. 2004.
CIE TC 1-48. Extrapolation. In CIE 015:2004 Colorimetry, 3rd Edition, pages 24. 2004.
CIE TC 1-48. The evaluation of whiteness. In CIE 015:2004 Colorimetry, 3rd Edition, pages 24. 2004.
CIE TC 1-48. APPENDIX E. INFORMATION ON THE USE OF PLANCK'S EQUATION FOR STANDARD AIR. In CIE 015:2004 Colorimetry, 3rd Edition, pages 77–82. 2004.
CIE TC 1-48. CIE 015:2004 Colorimetry, 3rd Edition. Commission Internationale de l'Eclairage, 2004. ISBN 978-3-901906-33-6.
CIE TC 1-48. CIE 1976 uniform chromaticity scale diagram (UCS diagram). In CIE 015:2004 Colorimetry, 3rd Edition, pages 24. 2004.
CIE TC 1-48. CIE 1976 uniform colour spaces. In CIE 015:2004 Colorimetry, 3rd Edition, pages 24. 2004.
CIE TC 1-48. EXPLANATORY COMMENTS - 5. In CIE 015:2004 Colorimetry, 3rd Edition, pages 68–68. 2004.
CIE TC 1-90. CIE 2017 colour fidelity index for accurate scientific use. Number 224 in Technical report / CIE. CIE Central Bureau, Vienna, 2017. ISBN 978-3-902842-61-9.
CIE. CIE Spectral Data.
CIE. CIE 15:2004 Tables Data. 2004.
Colblindor. Deuteranopia - Red-Green Color Blindness. http://www.color-blindness.com/deuteranopia-red-green-color-blindness/.
Colblindor. Protanopia - Red-Green Color Blindness. http://www.color-blindness.com/protanopia-red-green-color-blindness/.
Colblindor. Tritanopia - Blue-Yellow Color Blindness. http://www.color-blindness.com/tritanopia-blue-yellow-color-blindness/.
CVRL. Luminous efficiency. http://www.cvrl.org/lumindex.htm.
CVRL. New CIE XYZ functions transformed from the CIE (2006) LMS functions. http://cvrl.ioo.ucl.ac.uk/ciexyzpr.htm.
CVRL. Older CIE Standards. http://cvrl.ioo.ucl.ac.uk/cie.htm.
CVRL. Stiles & Burch individual 10-deg colour matching data. http://www.cvrl.org/stilesburch10_ind.htm.
CVRL. Stiles & Burch individual 2-deg colour matching data. http://www.cvrl.org/stilesburch2_ind.htm.
CVRL. CIE (2012) 10-deg XYZ "physiologically-relevant" colour matching functions. http://www.cvrl.org/database/text/cienewxyz/cie2012xyz10.htm.
CVRL. CIE (2012) 2-deg XYZ "physiologically-relevant" colour matching functions. http://www.cvrl.org/database/text/cienewxyz/cie2012xyz2.htm.
Digital Cinema Initiatives. Digital Cinema System Specification - Version 1.1. 2007.
Django Software Foundation. Slugify. 2022.
Dji. White Paper on D-Log and D-Gamut of DJI Cinema Color System. 2017.
Dolby. WHAT IS ICTCP? - INTRODUCTION. 2016.
EasyRGB. CMYK -> CMY. http://www.easyrgb.com/index.php?X=MATH&H=14#text14.
EasyRGB. CMY -> CMYK. http://www.easyrgb.com/index.php?X=MATH&H=13#text13.
EasyRGB. CMY -> RGB. http://www.easyrgb.com/index.php?X=MATH&H=12#text12.
EasyRGB. HSL -> RGB. http://www.easyrgb.com/index.php?X=MATH&H=19#text19.
EasyRGB. HSV -> RGB. http://www.easyrgb.com/index.php?X=MATH&H=21#text21.
EasyRGB. RGB -> CMY. http://www.easyrgb.com/index.php?X=MATH&H=11#text11.
EasyRGB. RGB -> HSL. http://www.easyrgb.com/index.php?X=MATH&H=18#text18.
EasyRGB. RGB -> HSV. http://www.easyrgb.com/index.php?X=MATH&H=20#text20.
European Broadcasting Union. EBU Tech 3213 - EBU Standard for Chromaticity Tolerances for Studio Monitors. August 1975.
European Color Initiative. ECI RGB v2. 2002.
FFmpeg Developers. FFmpeg::AVColorPrimaries. August 2022.
FFmpeg Developers. FFmpeg::AVColorSpace. August 2022.
FFmpeg Developers. FFmpeg::AVColorTransferCharacteristic. August 2022.
FiLMiC Inc. FiLMiC Pro - User Manual v6 - Revision 1. 2017.
Fujifilm. F-Log Data Sheet Ver.1.0. 2016.
GoPro, Haarm-Pieter Duiker, and Thomas Mansencal. Gopro.py. https://github.com/hpd/OpenColorIO-Configs/blob/master/aces_1.0.3/python/aces_ocio/colorspaces/gopro.py, 2016.
Javier Hernández-Andrés, Raymond L. Lee, and Javier Romero. Calculating correlated color temperatures across the entire gamut of daylight and skylight chromaticities. Applied Optics, 38(27):5703, September 1999. doi:10.1364/AO.38.005703.
Hewlett-Packard Development Company. Understanding the HP DreamColor LP2480zx DCI-P3 Emulation Color Space. 2009.
HunterLab. Hunter L,a,b Color Scale. 2008.
HunterLab. Illuminant Factors in Universal Software and EasyMatch Coatings. 2008.
HunterLab. Hunter Rd,a,b Color Scale - History and Application. 2012.
HutchColor. BestRGB (4 K).
HutchColor. DonRGB4 (4 K).
HutchColor. MaxRGB (4 K).
HutchColor. XtremeRGB (4 K).
IES Computer Committee and TM-27-14 Working Group. IES Standard Format for the Electronic Transfer of Spectral Data. Illuminating Engineering Society, 2014. ISBN 978-0-87995-295-2.
International Color Consortium. Specification ICC.1:2010 (Profile version 4.3.0.0). 2010.
International Electrotechnical Commission. IEC 61966-2-1:1999 - Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB. 1999.
International Organization for Standardization. INTERNATIONAL STANDARD ISO 7589-2002 - Photography - Illuminants for sensitometry - Specifications for daylight, incandescent tungsten and printer. 2002.
International Organization for Standardization. INTERNATIONAL STANDARD ISO 17321-1 - Graphic technology and photography - Colour characterisation of digital still cameras (DSCs) - Part 1: Stimuli, metrology and test procedures. 2012.
International Organization for Standardization. INTERNATIONAL STANDARD ISO/IEC 23001-8 - Information technology - MPEG systems technologies - Part 8: Coding-independent code points. 2013.
International Organization for Standardization. INTERNATIONAL STANDARD ISO/IEC 14496-10 - Information technology - Coding of audio-visual objects - Part 10: Advanced video coding. 2020.
International Organization for Standardization. INTERNATIONAL STANDARD ISO/IEC 23091-2 - Information technology - Coding-independent code points - Part 2: Video. 2021.
International Telecommunication Union. Recommendation ITU-R BT.1361 - Worldwide unified colorimetry and related characteristics of future television and imaging systems. 1998.
International Telecommunication Union. Recommendation ITU-R BT.470-6 - CONVENTIONAL TELEVISION SYSTEMS. 1998.
International Telecommunication Union. Recommendation ITU-R BT.1886 - Reference electro-optical transfer function for flat panel displays used in HDTV studio production BT Series Broadcasting service. 2011.
International Telecommunication Union. Recommendation ITU-R BT.601-7 - Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios. 2011.
International Telecommunication Union. Recommendation ITU-T T.871 - Information technology - Digital compression and coding of continuous-tone still images: JPEG File Interchange Format (JFIF). 2011.
International Telecommunication Union. Recommendation ITU-R BT.2020 - Parameter values for ultra-high definition television systems for production and international programme exchange. 2015.
International Telecommunication Union. Recommendation ITU-R BT.709-6 - Parameter values for the HDTV standards for production and international programme exchange BT Series Broadcasting service. 2015.
International Telecommunication Union. Report ITU-R BT.2246-4 - The present state of ultra-high definition television BT Series Broadcasting service. 2015.
International Telecommunication Union. Recommendation ITU-R BT.2100-1 - Image parameter values for high dynamic range television for use in production and international programme exchange. 2017.
International Telecommunication Union. Recommendation ITU-R BT.2100-2 - Image parameter values for high dynamic range television for use in production and international programme exchange. 2018.
International Telecommunication Union. Recommendation ITU-R BT.2124-0 - Objective metric for the assessment of the potential visibility of colour differences in television. 2019.
International Telecommunication Union. Recommendation ITU-T H.273 - Coding-independent code points for video signal type identification. 2021.
Laurent. Reproducibility of python pseudo-random numbers across systems and versions? http://stackoverflow.com/questions/8786084/reproducibility-of-python-pseudo-random-numbers-across-systems-and-versions, 2012.
Leica Camera AG. Leica L-Log Reference Manual. 2022.
Francisco Martínez-Verdú, Esther Perales, Elisabet Chorro, Dolores de Fez, Valentín Viqueira, and Eduardo Gilabert. Computation and visualization of the MacAdam limits for any lightness, hue angle, and light source. Journal of the Optical Society of America A, 24(6):1501, June 2007. doi:10.1364/JOSAA.24.001501.
Munsell Color Science. Macbeth Colorchecker.
Munsell Color Science. Munsell Colours Data. http://www.cis.rit.edu/research/mcsl2/online/munsell.php.
National Electrical Manufacturers Association. Digital Imaging and Communications in Medicine (DICOM) Part 14: Grayscale Standard Display Function. 2004.
Nikon. N-Log Specification Document - Version 1.0.0. 2018.
Panasonic. VARICAM V-Log/V-Gamut. 2014.
RED Digital Cinema. White Paper on REDWideGamutRGB and Log3G10. 2017.
Renewable Resource Data Center. Reference Solar Spectral Irradiance: ASTM G-173. http://rredc.nrel.gov/solar/spectra/am1.5/ASTMG173/ASTMG173.html, 2003.
Rising Sun Research. cineSpace LUT Library. https://sourceforge.net/projects/cinespacelutlib/.
Saeedn. Extend a line segment a specific distance. http://stackoverflow.com/questions/7740507/extend-a-line-segment-a-specific-distance.
sastanin. How to make scipy.interpolate give an extrapolated result beyond the input range? http://stackoverflow.com/a/2745496/931625.
Society of Motion Picture and Television Engineers. RP 177:1993 - Derivation of Basic Television Color Equations. Volume RP 177:1993. The Society of Motion Picture and Television Engineers, January 1993. ISBN 978-1-61482-191-5. doi:10.5594/S9781614821915.
Society of Motion Picture and Television Engineers. ANSI/SMPTE 240M-1995 - Signal Parameters - 1125-Line High-Definition Production Systems. 1999.
Society of Motion Picture and Television Engineers. RP 145:2004: SMPTE C Color Monitor Colorimetry. Volume RP 145:2004. The Society of Motion Picture and Television Engineers, January 2004. ISBN 978-1-61482-164-9. doi:10.5594/S9781614821649.
Society of Motion Picture and Television Engineers. SMPTE ST 2084:2014 - Dynamic Range Electro-Optical Transfer Function of Mastering Reference Displays. 2014. doi:10.5594/SMPTE.ST2084.2014.
Society of Motion Picture and Television Engineers. ST 428-1:2019 - D-Cinema Distribution Master — Image Characteristic. 2019. doi:10.5594/SMPTE.ST428-1.2019.
Sony Corporation. S-Gamut3_S-Gamut3Cine_Matrix.xlsx.
Sony Corporation. S-Log Whitepaper.
Sony Corporation. Technical Summary for S-Gamut3.Cine/S-Log3 and S-Gamut3/S-Log3.
Sony Corporation. S-Log2 Technical Paper. 2012.
Sony Electronics Corporation. IDT.Sony.Venice_SLog3_SGamut3Cine.ctl. 2020.
Sony Electronics Corporation. IDT.Sony.Venice_SLog3_SGamut3.ctl. 2020.
Sony Imageworks. Make.py. https://github.com/imageworks/OpenColorIO-Configs/blob/master/nuke-default/make.py, 2012.
The Academy of Motion Picture Arts and Sciences. Academy Spectral Similarity Index (SSI): Overview. 2019.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Academy Color Encoding System. http://www.oscars.org/science-technology/council/projects/aces.html.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. ACESutil.Lin_to_Log2_param.ctl. https://github.com/ampas/aces-dev/blob/518c27f577e99cdecfddf2ebcfaa53444b1f9343/transforms/ctl/utilities/ACESutil.Lin_to_Log2_param.ctl.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. ACESutil.Log2_to_Lin_param.ctl. https://github.com/ampas/aces-dev/blob/518c27f577e99cdecfddf2ebcfaa53444b1f9343/transforms/ctl/utilities/ACESutil.Log2_to_Lin_param.ctl.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Specification S-2013-001 - ACESproxy, an Integer Log Encoding of ACES Image Data. 2013.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Specification S-2014-003 - ACEScc, A Logarithmic Encoding of ACES Data for use within Color Grading Systems. 2014.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Technical Bulletin TB-2014-004 - Informative Notes on SMPTE ST 2065-1 - Academy Color Encoding Specification (ACES). 2014.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Technical Bulletin TB-2014-012 - Academy Color Encoding System Version 1.0 Component Names. 2014.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Procedure P-2013-001 - Recommended Procedures for the Creation and Use of Digital Camera System Input Device Transforms (IDTs). 2015.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Specification S-2014-006 - Common LUT Format (CLF) - A Common File Format for Look-Up Tables. 2020.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project. Specification S-2016-001 - ACEScct, A Quasi-Logarithmic Encoding of ACES Data for use within Color Grading Systems. http://j.mp/S-2016-001, 2016.
The Academy of Motion Picture Arts and Sciences, Science and Technology Council, and Academy Color Encoding System (ACES) Project Subcommittee. Specification S-2014-004 - ACEScg - A Working Space for CGI Render and Compositing. 2015.
Wikipedia. Ellipse. https://en.wikipedia.org/wiki/Ellipse.
Wikipedia. Approximation. http://en.wikipedia.org/wiki/Color_temperature#Approximation, 2001.
Wikipedia. Color temperature. http://en.wikipedia.org/wiki/Color_temperature, 2001.
Wikipedia. Luminance. https://en.wikipedia.org/wiki/Luminance, 2001.
Wikipedia. Rayleigh scattering. http://en.wikipedia.org/wiki/Rayleigh_scattering, 2001.
Wikipedia. Lagrange polynomial - Definition. https://en.wikipedia.org/wiki/Lagrange_polynomial#Definition, 2003.
Wikipedia. Luminosity function. https://en.wikipedia.org/wiki/Luminosity_function#Details, 2003.
Wikipedia. Mean squared error. https://en.wikipedia.org/wiki/Mean_squared_error, 2003.
Wikipedia. Michaelis-Menten kinetics. https://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten_kinetics, 2003.
Wikipedia. Rayleigh–Jeans law. https://en.wikipedia.org/wiki/Rayleigh–Jeans_law, 2003.
Wikipedia. Vandermonde matrix. https://en.wikipedia.org/wiki/Vandermonde_matrix, 2003.
Wikipedia. HSL and HSV. http://en.wikipedia.org/wiki/HSL_and_HSV, 2003.
Wikipedia. Peak signal-to-noise ratio. https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio, 2004.
Wikipedia. Surfaces. http://en.wikipedia.org/wiki/Gamut#Surfaces, 2004.
Wikipedia. Whiteness. http://en.wikipedia.org/wiki/Whiteness, 2004.
Wikipedia. Wide-gamut RGB color space. http://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space, 2004.
Wikipedia. YCbCr. https://en.wikipedia.org/wiki/YCbCr, 2004.
Wikipedia. Lanczos resampling. https://en.wikipedia.org/wiki/Lanczos_resampling, 2005.
Wikipedia. Luminous Efficacy. https://en.wikipedia.org/wiki/Luminous_efficacy, 2005.
Wikipedia. Mesopic weighting function. http://en.wikipedia.org/wiki/Mesopic_vision#Mesopic_weighting_function, 2005.
Wikipedia. CIE 1931 color space. http://en.wikipedia.org/wiki/CIE_1931_color_space, 2005.
Wikipedia. ISO 31-11. https://en.wikipedia.org/wiki/ISO_31-11, 2005.
Wikipedia. List of common coordinate transformations. http://en.wikipedia.org/wiki/List_of_common_coordinate_transformations, 2006.
Wikipedia. White points of standard illuminants. http://en.wikipedia.org/wiki/Standard_illuminant#White_points_of_standard_illuminants, 2006.
Wikipedia. Lightness. http://en.wikipedia.org/wiki/Lightness, 2007.
Wikipedia. The reverse transformation. http://en.wikipedia.org/wiki/CIELUV#The_reverse_transformation, 2007.
Wikipedia. CAT02. http://en.wikipedia.org/wiki/CIECAM02#CAT02, 2007.
Wikipedia. CIECAM02. http://en.wikipedia.org/wiki/CIECAM02, 2007.
Wikipedia. CIELUV. http://en.wikipedia.org/wiki/CIELUV, 2007.
Wikipedia. Color difference. http://en.wikipedia.org/wiki/Color_difference, 2008.
Wikipedia. Relation to CIE XYZ. http://en.wikipedia.org/wiki/CIE_1960_color_space#Relation_to_CIE_XYZ, 2008.
Wikipedia. CIE 1960 color space. http://en.wikipedia.org/wiki/CIE_1960_color_space, 2008.
Wikipedia. CIE 1964 color space. http://en.wikipedia.org/wiki/CIE_1964_color_space, 2008.
Wikipedia. HCL color space. https://en.wikipedia.org/wiki/HCL_color_space, 2015.
X-Rite. New color specifications for ColorChecker SG and Classic Charts. http://xritephoto.com/ph_product_overview.aspx?ID=938&Action=Support&SupportID=5884#, 2016.
X-Rite and Pantone. Color iQC and Color iMatch Color Calculations Guide. 2012.
Indirect References#
Some extra references are used in the codebase but are not directly part of the public API: