<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Materials from book :: Coding For Chemists</title>
    <link>https://codingforchemistsbook.com/book_material/index.html</link>
    <description></description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <managingEditor>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</managingEditor>
    <webMaster>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</webMaster>
    <atom:link href="https://codingforchemistsbook.com/book_material/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Chapter 0</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-0/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-0/index.html</guid>
<description>Data There is no data required for or used in Chapter 0.&#xA;Code blocks from chapter # A function to calculate the volumes of stock solutions needed to create solutions with given catalyst, HCl, and ionic strength # All concentrations are in M # The final volume is in mL and assumed to be 1 unless provided def calcPlateVols(conc_cat, conc_HCl, I, vol_final = 1): # Define our stock solutions conc_stock_cat = 0.1 # M conc_stock_HCl = 6 # M conc_stock_NaCl = 3 # M # Calculate the volumes needed vol_cat = conc_cat / conc_stock_cat * vol_final vol_HCl = conc_HCl / conc_stock_HCl * vol_final vol_NaCl = (I - conc_HCl) / conc_stock_NaCl * vol_final # Calculate the water needed to make up 1 mL vol_water = vol_final - vol_cat - vol_HCl - vol_NaCl print(&#39;[ ] catalyst solution (mL)\n&#39;, vol_cat) print(&#39;[ ] HCl (mL)\n&#39;, vol_HCl) print(&#39;[ ] NaCl (mL)\n&#39;, vol_NaCl) print(&#39;[ ] water (mL)\n&#39;, vol_water) Solutions to Exercises Giving your computer instructions using Python commands Exercise 0 Using Anaconda Navigator, verify that the following Python libraries are installed, and if not, install them. If you want directions you can use the resources at the website.</description>
    </item>
    <item>
      <title>Chapter 1</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-1/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-1/index.html</guid>
      <description>Data There is no data required for or used in Chapter 1.&#xA;Code blocks from chapter # Version 0.2.0 - 240605 - Took the function and wrote a script to calculate volumes of stock solutions needed for a well plate of arbitrary size. # Version 0.1.0 - 240605 - A function that calculates volumes of stock solutions needed for a well in a well plate. import numpy as np # A function to calculate the volumes of stock solutions needed to create solutions with given catalyst, HCl, and ionic strength # All concentrations are in M # The final volume is in mL and assumed to be 1 unless provided def calcPlateVols(conc_cat, conc_HCl, I, vol_final = 1): # Define our stock solutions conc_stock_cat = 0.1 # M conc_stock_HCl = 6 # M conc_stock_NaCl = 3 # M # Calculate the volumes needed vol_cat = conc_cat / conc_stock_cat * vol_final vol_HCl = conc_HCl / conc_stock_HCl * vol_final vol_NaCl = (I - conc_HCl) / conc_stock_NaCl * vol_final # Calculate the water needed to make up 1 mL vol_water = vol_final - vol_cat - vol_HCl - vol_NaCl print(&#39;[ ] catalyst solution (mL)\n&#39;, vol_cat) print(&#39;[ ] HCl (mL)\n&#39;, vol_HCl) print(&#39;[ ] NaCl (mL)\n&#39;, vol_NaCl) print(&#39;[ ] water (mL)\n&#39;, vol_water) # Define the dimensions of our well plate rows = 4 cols = 6 # Define our experimental concentrations and ionic strengths conc_cat = 0.01 # M conc_HCl_start = 0.0 # M conc_HCl_end = 0.01 # M I_start = 0.02 # M I_end = 0.2 # M # Get the concentration of catalyst in each well cat = conc_cat * np.ones((rows, cols)) # Get the concentration of HCl in each well # We will do this by multiplying two 1D arrays to make a 2D array # First, define the concentration of HCl in each row MHCl_row = np.linspace(conc_HCl_start, conc_HCl_end, cols) # Next, each row should be the same, so make an array of all ones MHCl_col = np.ones(rows) # Finally, we can outer multiply these two arrays to make a 2D array MHCl = np.outer(MHCl_col, MHCl_row) # Now we need to get the ionic 
strengths in each well by a similar method # Here, the columns are all the same instead of the rows # First, make a row that is all ones ionic_row = np.ones(cols) # Next, make an array that represents one column ionic_col = np.linspace(I_start, I_end, rows) # Finally, outer multiply to get the 2D array ionic = np.outer(ionic_col, ionic_row) # Calculate and print the volumes calcPlateVols(cat, MHCl, ionic) Solutions to Exercises Targeted exercises Importing libraries to add capabilities to Python Exercise 0 In the IDLE or console of an IDE, import Numpy, and calculate the following. It may be helpful to know the following about Numpy: if you import Numpy as np then base-10 logarithms are accessed using np.log10, the value of $\pi$ is accessed using np.pi, the value of $e$ is accessed using np.e.</description>
    </item>
    <item>
      <title>Chapter 2</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-2/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-2/index.html</guid>
      <description>Data Download Data for Chapter 2&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A program to plot one uv-vis spectrum from a .csv file Requires: a .csv file with col 1 as wavelength and col 2 as intensity Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250131 - initial version &#39;&#39;&#39; import numpy as np # needed for genfromtxt() from plotly.subplots import make_subplots # needed to plot from codechembook.quickTools import quickOpenFilename # Specify the name and place for data data_file = quickOpenFilename() # Import wavelength (nm) and absorbance to plot as a numpy array x_data, y_data = np.genfromtxt(data_file, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Construct the plot - here UVvis holds the figure object UVvis = make_subplots() # make a figure object UVvis.add_scatter(x = x_data, y = y_data, # make a scatter trace object mode = &#39;lines&#39;, # this ensures that we will only get lines and not markers showlegend = False) # this prevents a legend from being automatically created # Format the figure UVvis.update_yaxes(title = &#39;absorbance&#39;) UVvis.update_xaxes(title = &#39;wavelength /nm&#39;, range = [270, 1100]) UVvis.update_layout(template = &#39;simple_white&#39;) # set the details for the appearance # Display the figure UVvis.show(&#39;browser+png&#39;) # show an interactive plot and in the spyder Plots window # Save the spectra using the input file name but replacing .csv with the image file form UVvis.write_image(data_file.with_suffix(&#39;.svg&#39;)) # save in the same location as the data file UVvis.write_image(data_file.with_suffix(&#39;.png&#39;)) # save in the same location as the data file Solutions to Exercises Targeted exercises Exploiting_built-in_methods_to_manipulate_data_stored_in_objects Make a Numpy array that has all integers from 1 to 14, including both 1 and 14. Assign this to the variable pH. 
Make a new Numpy array, assigned to the variable conc_H, that is the concentration of protons that corresponds to each p$H$ value. For both array objects, use methods of Numpy arrays to accomplish the following (you may need to read the online Numpy documentation):</description>
    </item>
    <item>
      <title>Chapter 3</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-3/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-3/index.html</guid>
      <description>Data Download Data for Chapter 3&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A program to plot multiple uv-vis spectra from .csv files from a titration experiment Requires: .csv files with col 1 as wavelength and col 2 as intensity Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250131 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots from codechembook.quickTools import quickOpenFilenames # First, we need the names of the files that we want to plot # Ask the user to select the files and then sort the resulting list data_files = quickOpenFilenames(filetypes = &#39;CSV files, *.csv&#39;) sorted_data_files = sorted(data_files) # Next, we will process one file at a time and add it to the plot titration_series = make_subplots() # start a blank plotly figure object # Read the data in one file and add it as a scatter trace to the figure object for file in sorted_data_files: # loop through the data files one at a time # Read the data and store it in temporary x and y variables x_data, y_data = np.genfromtxt(file, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Add data as scatter trace with formatted lines and exclude from legend titration_series.add_scatter(x = x_data, y = y_data, line = dict(color = &#39;gray&#39;, width = 1, dash = &#39;dot&#39;), name = file.stem + &#39; eqs&#39;, showlegend=False) # Adjust the appearance of only the first and last traces to highlight titration_series.update_traces(selector = 0, # specify the initial trace line = dict(color = &#39;darkcyan&#39;, width = 2, dash = &#39;solid&#39;), showlegend = True, name = &#39;initial&#39;) titration_series.update_traces(selector = -1, # specify the final trace line = dict(color = &#39;darkred&#39;, width = 2, dash = &#39;solid&#39;), showlegend = True, name = &#39;final&#39;) # Move the initial trace to the end of the data, so that it is 
drawn on top titration_series.data = titration_series.data[1:] + titration_series.data[:1] # Format the plot area and then show it and then save it titration_series.update_layout(template = &#39;simple_white&#39;) titration_series.update_xaxes(title = &#39;wavelength /nm&#39;, range = [270, 1100]) titration_series.update_yaxes(title = &#39;absorbance&#39;, range = [0, 4.5]) titration_series.show(&#39;png+browser&#39;) titration_series.write_image(&#39;titration.png&#39;, width = 3*300, height = 2*300) Solutions to Exercises Targeted exercises Making ordered collections of data more flexible using list objects Exercise 0 Make a list that contains no items.</description>
    </item>
    <item>
      <title>Chapter 4</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-4/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-4/index.html</guid>
<description>Data Chapter 4 uses the same data as for Chapter 3&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter from rdkit import Chem from rdkit.Chem import Draw # Define the SMILES for acetate acetate_smiles = &#34;CC(=O)[O-]&#34; # Convert the SMILES to an RDKit molecule object acetate_mol = Chem.MolFromSmiles(acetate_smiles) # Display the structure Draw.ShowMol(acetate_mol) def velocity_at_time(time): &#39;&#39;&#39; A function to find the velocity of a dropped object at a given time. Required Params: time (float): the time that has elapsed since the drop Returns: v (float): the velocity at that time &#39;&#39;&#39; v = 9.8 * time return v v1 = velocity_at_time(2) print(v1) print(velocity_at_time(3)) &#39;&#39;&#39; A program to plot multiple uv-vis spectra from .csv files from a titration experiment Requires: .csv files with col 1 as wavelength and col 2 as intensity Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.1.0 - 250201 - added structure drawings showing protonation reaction v1.0.0 - 250131 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots from codechembook.quickTools import quickOpenFilenames # First, we need the names of the files that we want to plot # Ask the user to select the files and then sort the resulting list data_files = quickOpenFilenames(filetypes = &#39;CSV files, *.csv&#39;) sorted_data_files = sorted(data_files) # Next, we will process one file at a time and add it to the plot titration_series = make_subplots() # start a blank plotly figure object # Read the data in one file and add it as a scatter trace to the figure object for file in sorted_data_files: # loop through the data files one at a time # Read the data and store it in temporary x and y variables x_data, y_data = np.genfromtxt(file, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Add data as scatter trace with formatted lines and exclude from legend 
titration_series.add_scatter(x = x_data, y = y_data, line = dict(color = &#39;gray&#39;, width = 1, dash = &#39;dot&#39;), name = file.stem + &#39; eqs&#39;, showlegend=False) # Adjust the appearance of only the first and last traces to highlight titration_series.update_traces(selector = 0, # specify the initial trace line = dict(color = &#39;darkcyan&#39;, width = 2, dash = &#39;solid&#39;), showlegend = True, name = &#39;initial&#39;) titration_series.update_traces(selector = -1, # specify the final trace line = dict(color = &#39;darkred&#39;, width = 2, dash = &#39;solid&#39;), showlegend = True, name = &#39;final&#39;) # Move the initial trace to the end of the data, so that it is drawn on top titration_series.data = titration_series.data[1:] + titration_series.data[:1] # Format the plot area and then show it and then save it titration_series.update_layout(template = &#39;simple_white&#39;) titration_series.update_xaxes(title = &#39;wavelength /nm&#39;, range = [270, 1100]) titration_series.update_yaxes(title = &#39;absorbance&#39;, range = [0, 4.5]) from rdkit import Chem from rdkit.Chem import Draw from rdkit.Chem.Draw.MolDrawing import DrawingOptions import base64 from codechembook.symbols.typesettingHTML import textsup def make_mol_uri(smiles, color, bonds_wide, bonds_tall): &#39;&#39;&#39; Function to generate an svg of molecular drawing using rdkit and then return it in a format that is appropriate for inclusion in a Plotly figure object. REQUIRED PARAMETERS smiles (string): Valid chemical smiles string. color (tuple): Color designated in the rgb format. bonds_wide (int): Specifies how many bonds wide the structure is bonds_tall (int): Specifies how many bonds tall the structure is RETURNS (bytes): Uri of the image for use in Plotly. 
&#39;&#39;&#39; # Create an RDKit molecule object from a SMILES string mol = Chem.MolFromSmiles(smiles) # Create a dictionary that defines the color of all atoms (here, the same) for key in range(1, 119): # 119 is exclusive, so it goes up to 118 DrawingOptions.elemDict[key] = color # Set options for drawing drawer = Draw.MolDraw2DSVG(50*bonds_wide, 50*bonds_tall) # create a canvas to draw on drawer.drawOptions().updateAtomPalette(DrawingOptions.elemDict) # the colors of atoms drawer.drawOptions().setBackgroundColour((0, 0, 0, 0)) # the canvas color # Now that the options are set, we can process the molecule drawing drawer.DrawMolecule(mol) # create the drawing instructions drawer.FinishDrawing() # use the drawing instructions to make the drawing # Generate the SVG instructions then convert to a Base64-encoded data URI with a Plotly-required preamble svg = drawer.GetDrawingText().replace(&#39;svg:&#39;, &#39;&#39;) svg_data_uri = f&#34;data:image/svg+xml;base64,{base64.b64encode(svg.encode(&#39;utf-8&#39;)).decode(&#39;utf-8&#39;)}&#34; return svg_data_uri # return the binary code for the image # Define the SMILES string for the base and acid species base_smiles = &#39;C1(N=CC=N2)=C2O[Ni]3(O1)OC4=NC=CN=C4O3&#39; acid_smiles = &#39;[H][N+]1=C2O[Ni]3(OC2=NC=C1)OC4=NC=CN=C4O3&#39; # Define colors for the base and acid species base_color = (0, 139/255, 139/255) # dark cyan acid_color = (139/255, 0, 0) # dark red # Loop through both molecules and create their respective images for species, color, position in zip( [base_smiles, acid_smiles], # the SMILES strings [base_color, acid_color], # the colors [580, 870]): # the positions of the images on the plot area # Add image to plot titration_series.add_layout_image( dict(source=make_mol_uri(species, color, bonds_wide = 8, bonds_tall = 3), xref=&#39;x&#39;, yref=&#39;y&#39;, # references for coordinate origin x=position, y=2.45, # x- and y-coordinate position of the image xanchor=&#39;center&#39;, yanchor=&#39;top&#39;,# 
alignment of image wrt x,y coords sizex=200, sizey=1)) # width and height of the image # Add an arrow to denote the reaction titration_series.add_annotation( dict(ax = 680, ay = 2, # arrow start coordinates x = 770, y = 2, # arrow end coordinates axref = &#39;x&#39;, ayref = &#39;y&#39;, xref = &#39;x&#39;, yref = &#39;y&#39;, # references for coordinates showarrow = True, arrowhead = 1, arrowwidth = 1.5, arrowcolor = &#39;grey&#39;)) # arrow format # Add H+ label titration_series.add_annotation( dict(text = &#39;H&#39; + textsup(&#39;+&#39;), font = dict(color = &#39;grey&#39;), xref = &#39;x&#39;, yref = &#39;y&#39;, # references for coordinate origin x = (770+680)/2, y = 2, # x- and y-coordinate position of the image xanchor = &#39;center&#39;, yanchor = &#39;bottom&#39;, # location of text box wrt position showarrow = False)) # we want no arrow associated with this annotation # eliminate the legend entries for all traces titration_series.update_traces(showlegend = False) # Now output the plot titration_series.show(&#39;png+browser&#39;) titration_series.write_image(&#39;titration.png&#39;, width = 6*300, height = 4*300) Solutions to Exercises Targeted exercises Accessing items from dictionaries using keys Exercise 0 Starting with the dictionary that you made in Exercise 22 of Chapter 3, print the following properties for the given solvent:</description>
    </item>
    <item>
      <title>Chapter 5</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-5/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-5/index.html</guid>
<description>Data Download the data for Chapter 5&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A script to fit a calibration experiment to a line, starting with the UV/vis spectra Requires: .csv files with col 1 as wavelength and col 2 as intensity filenames should contain the concentration after an &#39;_&#39; Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250204 - initial version &#39;&#39;&#39; import numpy as np from lmfit.models import LinearModel from plotly.subplots import make_subplots from codechembook.quickTools import quickSelectFolder, quickHTMLFormula import codechembook.symbols as cs # identify the files you wish to plot and the place you want to save them at print(&#39;Select the folder with the calibration UVvis files.&#39;) filenames = sorted(quickSelectFolder().glob(&#39;*csv&#39;)) # Ask the user what wavelength to use for calibration l_max = float(input(&#39;What is the position of the feature of interest? 
&#39;)) # Loop through the file names, read the files, and extract the data into lists conc, absorb = [], [] # empty lists that will hold concentrations and absorbances for f in filenames: conc.append(float(f.stem.split(&#39;_&#39;)[1])) # get concentration from file name and add to list temp_x, temp_y = np.genfromtxt(f, unpack = True, delimiter=&#39;,&#39;, skip_header = 1) # read file l_max_index = np.argmin(abs(temp_x - l_max)) # Find index of x data point closest to l_max absorb.append(temp_y[l_max_index]) # get the closest absorbance value and add to list # Set up and perform a linear fit to the calibration data lin_mod = LinearModel() # create an instance of the linear model object pars = lin_mod.guess(absorb, x=conc) # have lmfit guess at initial values result = lin_mod.fit(absorb, pars, x=conc) # fit using these initial values print(result.fit_report()) # print out the results of the fit # Print out the molar absorptivity print(f&#39;The molar absorptivity is {result.params[&#34;slope&#34;].value / 1:5.2f} {cs.math.plusminus} {result.params[&#34;slope&#34;].stderr:4.2f} M{cs.typography.sup_minus}{cs.typography.sup_1}cm{cs.typography.sup_minus}{cs.typography.sup_1}&#39;) # Construct a plot with two subplots. 
Pane 1 contains the best fit and data, pane 2 the residual fig = make_subplots(rows = 2, cols = 1) # make a blank figure object that has two subplots # Add trace objects for the best fit, the data, and the residual to the plot fig.add_scatter(x = result.userkws[&#39;x&#39;], y = result.best_fit, mode = &#39;lines&#39;, showlegend=False, row = 1, col = 1) fig.add_scatter(x = result.userkws[&#39;x&#39;], y = result.data, mode = &#39;markers&#39;, showlegend=False, row =1, col = 1) fig.add_scatter(x = result.userkws[&#39;x&#39;], y = -1*result.residual, showlegend=False, row = 2, col = 1) # Create annotation for the slope and intercept values and uncertainties as a string annotation_string = f&#39;&#39;&#39; slope = {result.params[&#34;slope&#34;].value:.2e} {cs.math.plusminus} {result.params[&#34;slope&#34;].stderr:.2e}&lt;br&gt; intercept = {result.params[&#34;intercept&#34;].value:.2e} {cs.math.plusminus} {result.params[&#34;intercept&#34;].stderr:.2e}&lt;br&gt; R{cs.typography.sup_2} = {result.rsquared:.3f}&#39;&#39;&#39; fig.add_annotation(text = annotation_string, x = np.min(result.userkws[&#39;x&#39;]), y = result.data[-1], xanchor = &#39;left&#39;, yanchor = &#39;top&#39;, align = &#39;left&#39;, showarrow = False, ) # Create annotation for the extinction coefficient and uncertainty fig.add_annotation(text = f&#39;{cs.greek.epsilon} = {result.params[&#34;slope&#34;].value:5.2f} {cs.math.plusminus} {result.params[&#34;slope&#34;].stderr:4.2f} M&lt;sup&gt;-1&lt;/sup&gt;cm&lt;sup&gt;-1&lt;/sup&gt;&#39;, x = np.max(result.userkws[&#39;x&#39;]), y = result.data[0], xanchor = &#39;right&#39;, yanchor = &#39;top&#39;, align = &#39;right&#39;, showarrow = False, ) # Format the axes and the plot, then show it fig.update_xaxes(title = &#39;concentration /M&#39;) fig.update_yaxes(title = f&#39;absorbance @ {l_max} nm&#39;, row = 1, col = 1) fig.update_yaxes(title = &#39;residual absorbance&#39;, row = 2, col = 1) fig.update_layout(template = &#39;simple_white&#39;, title 
= f&#39;calibration for {quickHTMLFormula(&#34;(C4H2N2S2)2Ni&#34;)}&#39;) fig.show(&#39;png&#39;) # -*- coding: utf-8 -*- &#34;&#34;&#34; Created on Tue Dec 17 13:51:22 2024 @author: benle &#34;&#34;&#34; import numpy as np from pathlib import Path from lmfit.models import LinearModel from plotly.subplots import make_subplots from codechembook.quickTools import quickSelectFolder import codechembook.symbols as cs # identify the files you wish to plot and the place you want to save them at filenames = quickSelectFolder().glob(&#34;*csv&#34;) filenames = sorted(filenames) # at this point, we have a sorted list of filenames # now, extract the information we want from the files. conc, absorb = [], [] # empty lists that will hold concentrations and absorbances for f in filenames: # go through the file names print(f) x, y = np.genfromtxt(f, delimiter = &#34;,&#34;, unpack=True) fig = make_subplots() fig.add_scatter(x = x, y = y, line = dict(color = &#34;red&#34;)) fig.update_yaxes(title = &#34;absorbance&#34;) fig.update_xaxes(title = &#34;wavelength /nm&#34;) fig.update_layout(template = &#34;none&#34;) fig.show(&#34;png&#34;) fig.write_image(Path(f).with_suffix(&#34;.png&#34;)) Solutions to Exercises Targeted exercises Prompting the user for information using input Exercise 0 Using the dictionary of solvent properties that you made in Exercise 0 from Chapter 4, write code that asks the user for a solvent, then asks the user for a property, and then prints a sentence that tells the user the value of that property for the solvent they chose.</description>
    </item>
    <item>
      <title>Chapter 6</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-6/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-6/index.html</guid>
<description>Data Download Data for Chapter 6&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Fit data to multiple gaussian components and a linear background Requires: a .csv file with col 1 as wavelength and col 2 as intensity Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250207 - initial version &#39;&#39;&#39; import numpy as np from lmfit.models import LinearModel, GaussianModel from codechembook.quickPlots import plotFit import codechembook.symbols as cs from codechembook.quickTools import quickOpenFilename, quickPopupMessage # Ask the user for the filename containing the data to analyze quickPopupMessage(message = &#39;Select the file 0.001.csv&#39;) file = quickOpenFilename(filetypes = &#39;CSV Files, *.csv&#39;) # Read the file and unpack into arrays wavelength, absorbance = np.genfromtxt(file, skip_header=1, unpack = True, delimiter=&#39;,&#39;) # Set the upper and lower wavelength limits of the region of the spectrum to analyze lowerlim, upperlim = 450, 750 # Slice data to only include the region of interest trimmed_wavelength = wavelength[(wavelength &gt;= lowerlim) &amp; (wavelength &lt; upperlim)] trimmed_absorbance = absorbance[(wavelength &gt;= lowerlim) &amp; (wavelength &lt; upperlim)] # Construct a composite model and include initial guesses final_mod = LinearModel(prefix=&#39;lin_&#39;) # start with a linear model and add more later pars = final_mod.guess(trimmed_absorbance, x=trimmed_wavelength) # get guesses for linear coefficients c_guesses = [532, 580, 625] # initial guesses for centers s_guess = 10 # initial guess for widths a_guess = 20 # initial guess for amplitudes for i, c in enumerate(c_guesses): # loop through each peak to add corresponding gaussian component gauss = GaussianModel(prefix=f&#39;g{i+1}_&#39;) # create temporary gaussian model pars.update(gauss.make_params(center=dict(value=c), # set initial guesses for parameters 
amplitude=dict(value=a_guess, min = 0), sigma=dict(value=s_guess, min = 0, max = 25))) final_mod = final_mod + gauss # add each peak to the overall model # Fit the model to the data and store the results result = final_mod.fit(trimmed_absorbance, pars, x=trimmed_wavelength) # Create a plot of the fit results but don&#39;t show it yet plot = plotFit(result, residual = True, components = True, xlabel = &#39;wavelength /nm&#39;, ylabel = &#39;intensity&#39;, output = None) # Add best fitting value for the center of each gaussian component as annotations for i in range(1, len(c_guesses)+1): # loop through components and add annotations with centers plot.add_annotation(text = f&#39;{result.params[f&#34;g{i}_center&#34;].value:.1f} {cs.math.plusminus} {result.params[f&#34;g{i}_center&#34;].stderr:.1f}&#39;, x = result.params[f&#39;g{i}_center&#39;].value, y = i*.04 + result.params[f&#39;g{i}_amplitude&#39;].value / (result.params[f&#39;g{i}_sigma&#39;].value * np.sqrt(2*np.pi)), showarrow = False) plot.show(&#39;png&#39;) # show the final plot Solutions to Exercises Targeted exercises Getting yes/no answers to questions posed by comparison statements Exercise 0 What do the following conditional expressions evaluate to (True or False)? First, write down what you think it should be before running the code, then run it to see if you are right. If you were wrong, explain what you did wrong. Suppose the following variables are already defined: trial_number = 4; pH = 2.53; acid = &#39;HCl&#39;; conc_stock = 1.0 # M</description>
    </item>
    <item>
      <title>Chapter 7</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-7/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-7/index.html</guid>
<description>Data Download Data for Chapter 7&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A function that returns the pH response as equivalents of acid are added to a solution Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250210 - initial version &#39;&#39;&#39; import numpy as np from codechembook.quickPlots import quickScatter def pH_response (eqs, pKa = None, base_i = None): &#39;&#39;&#39; Calculates pH values based on equivalents of acid added, pKa, and starting concentration of base. REQUIRED PARAMETERS: eqs (ndarray): equivalents of acid added pKa (float): the pKa of the base base_i (float): initial concentration of the base RETURNS: (ndarray of floats): the pHs at each point &#39;&#39;&#39; # Calculate the terms for the quadratic equation a = 1 b = -1 * (base_i + eqs*base_i + 10**(-pKa)) # eqs*base_i gives concentration of acid c = base_i*(eqs * base_i) # Calculate the two roots of the equation x1 = (-1*b + np.sqrt(b**2 - 4*a*c))/(2*a) x2 = (-1*b - np.sqrt(b**2 - 4*a*c))/(2*a) # Make a list of the correct x-values x = [] # this is an empty list that will eventually hold the correct values for eq, e1, e2 in zip(eqs, x1, x2): # test each value if e1 &lt; 0: # can&#39;t have negative number of molecules x.append(e2) elif base_i &lt; e1 or eq*base_i &lt; e1: # can&#39;t be greater than qty. 
added x.append(e2) elif isinstance(e1, complex): # must be a real number x.append(e2) else: # if none of those, then it is the correct value x.append(e1) x = np.array(x) # convert our list to a numpy array return pKa + np.log10((base_i - x)/(x)) # return an array, converted to pH # Test that the function is working, by plotting an example if __name__ == &#39;__main__&#39;: # this only runs if the file is run directly # Make up some test numbers eqs = np.linspace(0.05, 0.95, 20) # Run the function and plot the result quickScatter(x = eqs, y = pH_response(eqs, pKa = 7, base_i = 1), mode = &#39;lines+markers&#39;, xlabel = &#39;equivalents added&#39;, ylabel = &#39;pH&#39;) &#39;&#39;&#39; A program to fit a titration to the Henderson-Hasselbalch equation Requires: pH_response from titration.py .csv files with col 1 as wavelength and col 2 as intensity Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250214 - initial version &#39;&#39;&#39; import numpy as np from pathlib import Path from lmfit import Model from codechembook.quickPlots import plotFit import codechembook.symbols as cs from codechembook.quickTools import quickOpenFilename, quickPopupMessage import os # Ask the user to specify a file with the data and read it quickPopupMessage(&#39;Select the file with the titration data to fit.&#39;) exp_eqs, exp_pHs = np.genfromtxt(quickOpenFilename(), delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Import the function we will want to use as our model try: # first, we attempt to import the pH_response function from TitrationModel import pH_response # import the function we want direct access to except ModuleNotFoundError: # if import fails, ask user to find the python script quickPopupMessage(&#39;titration.py not found. Please locate it using the file dialog. 
Click OK to open the file dialog.&#39;) script_path = quickOpenFilename(filetypes = &#39;*.py&#39;) # locate the .py file you want to use original_path = Path(&#39;.&#39;).resolve() # record the path you are currently using os.chdir(script_path.parent) # change directory to the one holding the .py file from TitrationModel import pH_response # import the .py file os.chdir(original_path) # change back to the working directory you started in # Define a new lmfit model using the pH_response function pH_model = Model(pH_response, independent_vars=[&#39;eqs&#39;]) # set up the model with eqs as the &#39;x&#39; axis # Set up the fit parameter and non-adjustable parameter for the initial concentration of the base pH_params = pH_model.make_params() # make a parameter object pH_params.add(&#39;pKa&#39;, value = np.mean(exp_pHs)) # specifications for the parameter base_i = 0.05 # the initial concentration of the base # Fit the model to the data and store the results pH_fit = pH_model.fit(data = exp_pHs, eqs = exp_eqs, params = pH_params, base_i = base_i) # Create a figure for the fit result but don&#39;t show it yet fig = plotFit(pH_fit, residual = True, xlabel = &#39;equivalents added&#39;, ylabel = &#39;pH&#39;, output = None) # Add a horizontal line to highlight the pKa as determined by the fit fig.add_scatter(x = [min(exp_eqs), max(exp_eqs)], y = [pH_fit.params[&#39;pKa&#39;].value, pH_fit.params[&#39;pKa&#39;].value], mode = &#39;lines&#39;, showlegend=False, line = dict(color = &#39;gray&#39;)) # Add an annotation containing the best fitting pKa and its uncertainty fig.add_annotation(x = max(exp_eqs), y = pH_fit.params[&#39;pKa&#39;].value, xanchor = &#39;right&#39;, yanchor = &#39;bottom&#39;, text = f&#39;pKa = {pH_fit.params[&#34;pKa&#34;].value:.3f} {cs.math.plusminus} {pH_fit.params[&#34;pKa&#34;].stderr:.3f}&#39;, showarrow = False) fig.show(&#39;png&#39;) Solutions to Exercises Targeted exercises Automatically running different code under different conditions using if-then-else statements
Exercise 0 Write three different functions that take as an argument a pH value and print a statement saying whether the value is ‘strongly acidic’, ‘acidic’, ’neutral’, ‘basic’, or ‘strongly basic’. One version can only use if statements, the second must use nested if-else statements, and the third must use if-elif-else. Prove that they work correctly by testing the values 1.0, 4.0, 7.0, 10.0, and 13.0.</description>
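One way the if-elif-else version of Exercise 0 could look is sketched below. The cutoff values separating the five categories (pH 3, 6, 8, and 11) are assumptions for illustration, since the exercise does not fix them.

```python
def classify_pH(pH):
    # Classify a pH value with a single if-elif-else chain,
    # checking from most basic to most acidic.
    # NOTE: the cutoffs 11, 8, 6, and 3 are assumed values.
    if pH > 11.0:
        label = 'strongly basic'
    elif pH > 8.0:
        label = 'basic'
    elif pH > 6.0:
        label = 'neutral'
    elif pH > 3.0:
        label = 'acidic'
    else:
        label = 'strongly acidic'
    print(f'pH {pH} is {label}')
    return label

# Test with the five values from the exercise
for test_pH in [1.0, 4.0, 7.0, 10.0, 13.0]:
    classify_pH(test_pH)
```

The if-only and nested if-else versions asked for in the exercise follow the same logic; only the branching syntax changes.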
    </item>
    <item>
      <title>Chapter 8</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-8/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-8/index.html</guid>
      <description>Data There is no data required for Chapter 8.&#xA;Code from chapter &#39;&#39;&#39; A code to numerically solve a reaction mechanism that is irreversible but has two intermediates using solve_ivp from scipy.integrate Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250217 - initial version &#39;&#39;&#39; import scipy.integrate as spi from codechembook.quickPlots import quickScatter # First we need to define a function containing our differential rate laws. def TwoIntermediatesDE(t, y, k): &#39;&#39;&#39; Differential rate laws for a catalytic reaction with two intermediates: A + cat -&gt;(k1) X X + B -&gt;(k2) Y Y -&gt;(k3) C + cat REQUIRED PARAMETERS: t (float): the current time in the simulation. Not explicitly used but needed by solve_ivp y (list of float): the current concentrations k (list of float): the rate constants RETURNS: (list of float): rate of change of concentration in the order of y &#39;&#39;&#39; A, B, cat, X, Y, C = y # unpack concentrations to convenient variables k1, k2, k3 = k # unpack rate constants dAdt = -k1 * A * cat dBdt = -k2 * X * B dcatdt = -k1 * A * cat + k3 * Y dXdt = k1 * A * cat - k2 * X * B dYdt = k2 * X * B - k3 * Y dCdt = k3 * Y return [dAdt, dBdt, dcatdt, dXdt, dYdt, dCdt] # Set up initial conditions and simulation parameters y0 = [1.0, 1.0, 0.2, 0.0, 0.0, 0.0] # concentrations (mM) [A, B, cat, X, Y, C] k = [5e1, 1e1, 5e0] # rate constants (1/s) time = [0, 10] # simulation start and end times (s) # Invoke solve_ivp and store the result object solution = spi.solve_ivp(TwoIntermediatesDE, time, y0, args = [k]) # Plot the results quickScatter(x = solution.t, # need a list of six identical time axes y = solution.y, name = [&#39;[A]&#39;, &#39;[B]&#39;, &#39;[cat]&#39;, &#39;[X]&#39;, &#39;[Y]&#39;, &#39;[C]&#39;], xlabel = &#39;Time (s)&#39;, ylabel = &#39;Concentration (mM)&#39;, output = &#39;png&#39;) &#39;&#39;&#39; A code to numerically solve
a first order kinetics problem using solve_ivp from scipy.integrate Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250217 - initial version &#39;&#39;&#39; import numpy as np import scipy.integrate as spi from plotly.subplots import make_subplots # First we need to define a function containing our differential rate law. def FirstOrderDE(t, y, k): &#39;&#39;&#39; Differential rate law for first-order kinetics REQUIRED PARAMETERS: t (float): the current time in the simulation. Not explicitly used but needed by solve_ivp y (float): the current concentration k (float): the rate constant RETURNS: (float): the rate of change of concentration with time &#39;&#39;&#39; dydt = -k * y return dydt # Set up initial conditions and simulation parameters y0 = 1.0 # concentration (mM) k = 1.0 # rate constant (1/s) time = [0, 10] # simulation start and end times (s) # Invoke solve_ivp and store the result object as solution solution = spi.solve_ivp(FirstOrderDE, time, [y0], args = [k]) # Compute the known analytical solution at each simulation time point anal_y = np.exp(-1 * k * solution.t) # Plot the numerical and analytical solutions and the difference between them fig = make_subplots(2, 1) fig.add_scatter(x = solution.t, y = solution.y[0], mode = &#39;markers&#39;, name = &#39;Numerical&#39;) # The data from the numerical solution fig.add_scatter(x = solution.t, y = anal_y, mode = &#39;lines&#39;, name = &#39;Exact&#39;) # The corresponding analytical solution fig.add_scatter(x = solution.t, y = 100 * (solution.y[0] - anal_y) / anal_y, name = &#39;Error&#39;, row = 2, col = 1) # The percent difference between them # Update the plot appearance fig.update_xaxes(title = &#39;Time (s)&#39;) fig.update_yaxes(title = &#39;Concentration (mM)&#39;, row = 1) fig.update_yaxes(title = &#39;Percent Difference&#39;, row = 2) fig.update_layout(template = &#39;simple_white&#39;) fig.show(&#39;png&#39;) Solutions to Exercises Targeted exercises Simulating the
kinetics of one elementary reaction using scipy.solve_ivp Exercise 0 Working from the code presented in Section Simulating_the_kinetics_of_one_elementary_reaction_using_scipy.solve_ivp, write a script that plots the maximum error in the simulation versus the relative tolerance specification within scipy.solve_ivp(). You can use the rtol keyword to set this. Use a range of rtol values that span $10^{-1}$ to $10^{-9}$, changing by factors of 10.</description>
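Exercise 0 above can be sketched as follows, reusing the chapter's FirstOrderDE rate law and measuring the maximum error against the analytical solution. The sketch prints the errors rather than plotting them, so that it stays self-contained.

```python
import numpy as np
import scipy.integrate as spi

def FirstOrderDE(t, y, k):
    # differential rate law for first-order decay: dy/dt = -k*y
    return -k * y

k = 1.0          # rate constant (1/s)
time = [0, 10]   # simulation start and end times (s)
rtols = 10.0 ** -np.arange(1, 10)   # 1e-1 down to 1e-9, in factors of 10

max_errors = []
for rtol in rtols:
    # solve with the given relative tolerance
    solution = spi.solve_ivp(FirstOrderDE, time, [1.0], args = [k], rtol = rtol)
    # evaluate the known analytical solution at the solver's own time points
    exact = np.exp(-k * solution.t)
    max_errors.append(np.max(np.abs(solution.y[0] - exact)))

for rtol, err in zip(rtols, max_errors):
    print(f'rtol = {rtol:.0e}  max error = {err:.2e}')
```

Note that the default absolute tolerance (atol) eventually limits the accuracy, so the error stops improving once rtol is small enough.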
    </item>
    <item>
      <title>Chapter 9</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-9/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-9/index.html</guid>
      <description>Data Download Data for Chapter 9&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A quick test script to check the accuracy of numerical integration Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250220 - initial version &#39;&#39;&#39; import numpy as np import scipy.integrate as spi # Gaussian parameters for the simulation A = 1 # amplitude x0 = 0 # center sigma = 1 # width # Create the test data and calculate the analytical value of the integral xdata = np.linspace(-10, 10, 11) ydata = A * np.exp(-1 * (xdata - x0)**2 / (2 * sigma**2)) area = A * sigma * np.sqrt(2 * np.pi) # Compute the area using the rectangle rule, the trapezoid rule, and Simpson&#39;s rule area_rect = np.sum(ydata[:-1] * (xdata[1:] - xdata[:-1])) # height * width area_trap = spi.trapezoid(ydata, x = xdata) area_simp = spi.simpson(ydata, x = xdata) # Print the errors print(f&#39;Area = {area:8.6f}&#39;) print( &#39;Errors: Absolute --&gt; % Diff&#39;) print(f&#39;Rectangle: {area - area_rect:8.6f} --&gt; {100*(area - area_rect)/area:4.1f}%&#39;) print(f&#39;Trapezoid: {area - area_trap:8.6f} --&gt; {100*(area - area_trap)/area:4.1f}%&#39;) print(f&#34;Simpson&#39;s: {area - area_simp:8.6f} --&gt; {100*(area - area_simp)/area:4.1f}%&#34;) &#39;&#39;&#39; Integrate the intensities of azide peaks in FTIR spectra and produce a kinetics plot Requires: FTIR spectra in .csv format for each time point filename must include the time the spectrum was taken Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250220 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots from codechembook.quickTools import quickOpenFilenames, quickPopupMessage from codechembook.quickPlots import customColorList from codechembook.symbols.chem import wavenumber as wn from codechembook.numericalTools import integrateRange # Get and sort the spectrum file names
quickPopupMessage(message = &#39;Select CSV files containing the FTIR spectra.&#39;) data_files = sorted(quickOpenFilenames(filetypes = &#39;CSV files, *.csv&#39;)) # Set the upper and lower wavenumbers for the integration integration_limits = np.array([1950, 2150]) # Make a color scale for the different traces in the figure colors = customColorList(len(data_files)) # Get samples of a continuously changing color scale # Loop through files, read data and times, put them in a dict, and add to the plot data = [] fig = make_subplots(rows = 1, cols = 3) for c, file in zip(colors, data_files): wavenumber, absorption = np.genfromtxt(file, delimiter = &#39;,&#39;, unpack = True) time = float(file.stem.split(&#39;_&#39;)[1]) # get the time from the file name # Create a dictionary with everything we need for this file, add it to the list data.append(dict(wavenumber = wavenumber, absorption = absorption, time = time)) # Add the trace to the plot fig.add_scatter(x = wavenumber, y = absorption, line = dict(color = c), name = time, row = 1, col = 1) # Loop through the spectra to background subtract and integrate for c, spec in zip(colors, data): # Find the indices of the points at the limits of integration xmin_index = np.argmin(abs(spec[&#39;wavenumber&#39;] - integration_limits[0])) xmax_index = np.argmin(abs(spec[&#39;wavenumber&#39;] - integration_limits[1])) # Calculate the line that connects the x and y values at the limits of integration m = (spec[&#39;absorption&#39;][xmax_index] - spec[&#39;absorption&#39;][xmin_index]) / (spec[&#39;wavenumber&#39;][xmax_index] - spec[&#39;wavenumber&#39;][xmin_index]) b = spec[&#39;absorption&#39;][xmin_index] - m * spec[&#39;wavenumber&#39;][xmin_index] # Subtract the baseline from the data and add it to the figure spec[&#39;back corr&#39;] = spec[&#39;absorption&#39;] - (m * spec[&#39;wavenumber&#39;] + b) fig.add_scatter(x = spec[&#39;wavenumber&#39;], y = spec[&#39;back corr&#39;], line = dict(color = c), showlegend = False, row = 1,
col = 2) # Integrate the baseline-corrected data and store it in the dict spec[&#39;integral&#39;] = integrateRange(spec[&#39;back corr&#39;], spec[&#39;wavenumber&#39;], integration_limits) # Calculate the maximum y value for this spectrum section spec[&#39;ymax&#39;] = np.max(spec[&#39;back corr&#39;][xmin_index:xmax_index]) # Find the max y height to determine the zoom range for the y-axis ymax = max([spec[&#39;ymax&#39;] for spec in data]) # Plot the kinetic trace tdata = np.array([spec[&#39;time&#39;] for spec in data]) # get array of times intdata = np.array([spec[&#39;integral&#39;] for spec in data]) # get array of integrals fig.add_scatter(x = tdata, y = intdata, name = &#39;kinetic trace&#39;, line = dict(color = &#39;black&#39;), row = 1, col = 3) # Format the plot fig.update_xaxes(title = f&#39;wavenumber / {wn}&#39;, row = 1, col = 1) fig.update_yaxes(title = &#39;absorption&#39;, row = 1, col = 1) fig.update_xaxes(title = f&#39;wavenumber / {wn}&#39;, range = integration_limits, row = 1, col = 2) fig.update_yaxes(title = &#39;absorption&#39;, range = [-0.1 * ymax, 1.1 * ymax], row = 1, col = 2) fig.update_xaxes(title = &#39;time /s&#39;, row = 1, col = 3) fig.update_yaxes(title = f&#39;intensity / AU{wn}&#39;, row = 1, col = 3) fig.update_layout(template = &#39;simple_white&#39;) fig.show(&#39;browser&#39;) # format for the static plot fig.update_layout(template = &#39;simple_white&#39;, showlegend = False, width = 4 * 300, height = 1.2 * 300, margin = {&#39;b&#39;: 10, &#39;t&#39;: 30, &#39;l&#39;: 10, &#39;r&#39;: 10}) fig.show(&#39;png&#39;) fig.write_image(format = &#39;png&#39;, file = &#39;IntegrateAzide.png&#39;) Solutions to Exercises Targeted exercises Integrating data using scipy.integrate Exercise 0 The cumulative integral is the integral from the beginning of the data up to a given data point, plotted against the value of the last data point integrated. 
Write a code that plots the cumulative integral of a Gaussian function with an amplitude of 1, a center of 5, and a width of 1, from 0 to 10 with 100 points on the $x$-axis.</description>
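A minimal sketch of the cumulative-integral exercise above, using scipy.integrate.cumulative_trapezoid; the plotting layer is left out so the sketch stays self-contained, and the final value is printed instead.

```python
import numpy as np
import scipy.integrate as spi

# Gaussian parameters from the exercise
A, x0, sigma = 1, 5, 1

# 100 x-axis points from 0 to 10
xdata = np.linspace(0, 10, 100)
ydata = A * np.exp(-1 * (xdata - x0)**2 / (2 * sigma**2))

# Cumulative integral: area from the start of the data up to each point
cum = spi.cumulative_trapezoid(ydata, x = xdata, initial = 0)

# The last value approaches the full analytical area, A * sigma * sqrt(2*pi)
print(f'total area = {cum[-1]:.4f} (exact {A * sigma * np.sqrt(2 * np.pi):.4f})')
```

Plotting cum against xdata (with the chapter's plotting tools, or any other) gives the S-shaped curve expected for a Gaussian.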
    </item>
    <item>
      <title>Chapter 10</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-10/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-10/index.html</guid>
      <description>Data Chapter 10 uses the same data as for Chapter 9&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Find the baseline of a small peak by smoothing the data first Requires: An FTIR spectrum in .csv format Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250226 - initial version &#39;&#39;&#39; import numpy as np import scipy.signal as sps from codechembook.quickPlots import quickScatter from codechembook.quickTools import quickOpenFilename, quickPopupMessage from codechembook.symbols.chem import wavenumber as wn # Get the spectrum file name quickPopupMessage(message = &#39;Select a CSV file containing the FTIR spectrum.&#39;) data_file = quickOpenFilename(filetypes = &#39;CSV files, *.csv&#39;) # Read the file wavenumber, absorption = np.genfromtxt(data_file, delimiter = &#39;,&#39;, unpack = True) # Apply Savitzky-Golay smoothing with a window of 100 and an order of 3 smooth_absorption = sps.savgol_filter(absorption, 100, 3) # Get the baseline connecting the limits of integration base_min, base_max = 1790, 1870 # the limits abs_min = smooth_absorption[np.argmin(np.abs(wavenumber - base_min))] abs_max = smooth_absorption[np.argmin(np.abs(wavenumber - base_max))] m = (abs_max - abs_min) / (base_max - base_min) b = abs_min - m * base_min baseline = m * wavenumber + b # Plot the raw data, smoothed data, and baseline to allow the limits of integration to be determined fig = quickScatter(x = wavenumber, y = [absorption, smooth_absorption, baseline], name = [&#39;raw data&#39;, &#39;smoothed data&#39;, &#39;baseline&#39;], output = None) # Modify each trace to have the appearance we want fig.update_traces(selector = 0) fig.update_traces(selector = 1, line = dict(width = 4)) fig.update_traces(selector = 2, line = dict(width = 4, dash = &#39;dash&#39;)) # Find the y axis range to show just the data we want to see in the plot int_range = (wavenumber &gt; 1750) &amp; (wavenumber &lt; 1900)
ymin, ymax = np.min(absorption[int_range]), np.max(absorption[int_range]) # Format the plot area fig.update_yaxes(title = &#39;absorption&#39;, range = [ymin - 0.1*(ymax-ymin), 1.1 * ymax]) fig.update_xaxes(title = f&#39;wavenumber /{wn}&#39;, range = [1750, 1900]) fig.update_layout(template = &#39;simple_white&#39;) fig.show(&#39;png&#39;) fig.write_image(format = &#39;png&#39;, file = &#39;SmoothIR.png&#39;) Solutions to Exercises Targeted exercises Comparing smoothing approaches Exercise 0 Imagine you have data with uncertainties for the dependent variables. You are considering smoothing using either binning or moving averages. The bin width and the moving average window would be the same size. After smoothing, which approach results in the largest absolute uncertainty? Which approach results in the largest relative uncertainties? You can, of course, use code to answer this question, if you want.</description>
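The Savitzky-Golay smoothing used in the chapter code above can be checked on synthetic data. The noise level, window length, and polynomial order below are illustrative assumptions, not values from the chapter.

```python
import numpy as np
import scipy.signal as sps

rng = np.random.default_rng(0)

# Synthetic "spectrum": a smooth curve plus normally distributed noise
x = np.linspace(0, 2 * np.pi, 500)
y_true = np.sin(x)
y_noisy = y_true + rng.normal(0, 0.2, x.size)

# Savitzky-Golay smoothing: fit a cubic polynomial in a 51-point moving window
y_smooth = sps.savgol_filter(y_noisy, window_length = 51, polyorder = 3)

# Compare RMS deviation from the noise-free curve before and after smoothing
rms_raw = np.sqrt(np.mean((y_noisy - y_true)**2))
rms_smooth = np.sqrt(np.mean((y_smooth - y_true)**2))
print(f'RMS deviation before smoothing: {rms_raw:.3f}, after: {rms_smooth:.3f}')
```

Because the underlying curve is slowly varying compared to the window, the polynomial fit removes most of the noise without distorting the signal.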
    </item>
    <item>
      <title>Chapter 11</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-11/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-11/index.html</guid>
      <description>Data Download Data for Chapter 11&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A quick test of the accuracy of linear and cubic spline interpolation Written by: myself (email@domain.com) Changelog: 240606 - v1.0.0 - initial version &#39;&#39;&#39; import numpy as np import scipy.interpolate as spi # Define a function that returns the y-values for a Gaussian function def Gaussian(x, A, x0, sigma): return A * np.exp(-1 * (x - x0)**2 / (2 * sigma**2)) # Define the parameters of the Gaussian A = 1 x0 = 0 sigma = 2 # Create the x- and y-values for the fake data x_data = np.linspace(-5, 5, 8) y_data = Gaussian(x_data, A, x0, sigma) # Create the x- and (exact) y-values for the points at which we would like interpolation x_interp = np.linspace(-5, 5, 1000) y_interp_exact = Gaussian(x_interp, A, x0, sigma) # Do the interpolation. First we will do a linear interpolation y_interp_linear = np.interp(x_interp, x_data, y_data) # Do the cubic spline interpolation - first create the cubic spline object cs = spi.CubicSpline(x_data, y_data) # Now generate the array of interpolated points using the cubic spline object y_interp_cubic = cs(x_interp) # Compute and print the RMSE print(f&#34;Linear Interpolation RMSE: {np.sqrt(np.sum((y_interp_linear - y_interp_exact)**2)):4.2f}&#34;) print(f&#34; Cubic Interpolation RMSE: {np.sqrt(np.sum((y_interp_cubic - y_interp_exact)**2)):4.2f}&#34;) &#39;&#39;&#39; Produce an interpolated 785 nm Raman spectrum with data points that match an old 532 nm spectrum Requires: one 532 nm and one 785 nm Raman spectrum Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250304 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots from codechembook.quickTools import quickSelectFolder, quickPopupMessage from codechembook.symbols.chem import wavenumber as wn # Scaling factor for 532 data scale_532 = 2.95 # Get the
folder containing the files to process quickPopupMessage(message = &#39;Select the folder with the Raman spectra.&#39;) folder_name = quickSelectFolder() # Read the data: 785 is the new data that has a contaminant, 532 is the old data x532, y532 = np.genfromtxt(folder_name/&#39;oldNPs.csv&#39;, delimiter = &#39;,&#39;, skip_header = 2, unpack = True) x785, y785 = np.genfromtxt(folder_name/&#39;newNPs.csv&#39;, delimiter = &#39;,&#39;, skip_header = 2, unpack = True) # Interpolate to the 532 spectrum because 785 has the larger span of x points y785_interp = np.interp(x532, x785, y785) # Normalize the 532 data and subtract it from the 785 data y_delta = y785_interp - scale_532 * y532 # Plot the 785 spectrum, the 532 spectrum, and the difference spectrum fig = make_subplots(2, 1) fig.add_scatter(x = x532, y = y785_interp, name = &#39;785 nm&#39;, row = 1, col = 1) fig.add_scatter(x = x532, y = scale_532 * y532, name = &#39;532 nm&#39;, row=1, col=1) fig.add_scatter(x = x532, y = y_delta, name = &#39;Subtracted&#39;, showlegend = False, row = 2, col = 1) fig.update_xaxes(title = f&#39;wavenumber /{wn}&#39;) fig.update_yaxes(title = &#39;intensity&#39;) fig.update_layout(template = &#39;simple_white&#39;, font_size = 18, width = 3 * 300, height = 3 * 300, margin = dict(b = 10, t = 30, l = 10, r = 10)) fig.show(&#39;png&#39;) fig.write_image(&#39;raman.png&#39;) Solutions to Exercises Targeted exercises Implementing linear interpolation using numpy.interp Exercise 1 Consider a Gaussian distribution with $x_0 = 0$, $\sigma = 2$, and amplitude of 1.</description>
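The linear-versus-cubic RMSE comparison above can be extended to show how both methods improve with sampling density. This sketch reuses the chapter's Gaussian test function, but the sample counts (8, 16, 32) are arbitrary choices for illustration.

```python
import numpy as np
import scipy.interpolate as spi

def Gaussian(x, A, x0, sigma):
    # Gaussian test function, as in the chapter code
    return A * np.exp(-1 * (x - x0)**2 / (2 * sigma**2))

# Dense grid of points where the interpolants are evaluated and compared
x_interp = np.linspace(-5, 5, 1000)
y_exact = Gaussian(x_interp, 1, 0, 2)

rmse_linear, rmse_cubic = [], []
for n in [8, 16, 32]:
    # Sample the Gaussian at n points, then interpolate back to the dense grid
    x_data = np.linspace(-5, 5, n)
    y_data = Gaussian(x_data, 1, 0, 2)
    y_lin = np.interp(x_interp, x_data, y_data)        # linear interpolation
    y_cub = spi.CubicSpline(x_data, y_data)(x_interp)  # cubic spline
    rmse_linear.append(np.sqrt(np.mean((y_lin - y_exact)**2)))
    rmse_cubic.append(np.sqrt(np.mean((y_cub - y_exact)**2)))
    print(f'{n:2d} points: linear RMSE {rmse_linear[-1]:.2e}, cubic RMSE {rmse_cubic[-1]:.2e}')
```

For smooth data like a Gaussian, the cubic spline converges much faster than linear interpolation as the number of sample points grows.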
    </item>
    <item>
      <title>Chapter 12</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-12/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-12/index.html</guid>
      <description>Data Download Data for Chapter 12&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Find the peaks of the reduction and oxidation waves in a CV experiment and calculate E_1/2 Requires: CV data in a .csv file with reduction first then oxidation Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250322 - initial version &#39;&#39;&#39; import numpy as np import scipy.signal as sps from plotly.subplots import make_subplots from codechembook.quickTools import quickOpenFilename, quickPopupMessage from codechembook.symbols.greek import Delta from codechembook.symbols.typesettingHTML import textsub, textit # Find the data file quickPopupMessage(message = &#39;Select a file with CV data.&#39;) filename = quickOpenFilename(filetypes = &#39;CSV files, *.csv&#39;) # Read the data - reduction and oxidation waves are included here E, I = np.genfromtxt(filename, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Find the index of the scan turnaround to separate the oxidation and reduction waves i = 1 # a counter variable stop = False # will change to true to stop the loop while i &lt; len(E) and stop == False: # loop until we reach the end or find the turnaround if E[i] &lt; E[i-1]: # as long as E is reducing, then we are on reduction i += 1 else: # E started to increase so this must be the turnaround stop = True # Get separate arrays for reduction and oxidation E_red, I_red = E[:i], I[:i] # reduction is simple if E[i] == E[i+1]: # check special case where two points are the same at the turnaround E_ox, I_ox = E[i:], I[i:] else: # only one point at the turnaround E_ox, I_ox = E[i-1:], I[i-1:] # we want to include the turnaround point too, thus i-1 # Find indices of the peak currents.
Use prominence of 10% of max to avoid noise E_pa_index, E_pa_dict = sps.find_peaks(-1*I_red, prominence = .1*np.max(-1*I_red)) E_pc_index, E_pc_dict = sps.find_peaks(I_ox, prominence = .1*np.max(I_ox)) # Find the potentials at the peak current E_pa = E_red[E_pa_index[0]] E_pc = E_ox[E_pc_index[-1]] # Print the results print(f&#39;E_pa = {E_pa:.3f} V&#39;) print(f&#39;E_pc = {E_pc:.3f} V&#39;) print(f&#39;{Delta}E = {np.abs(E_pa - E_pc):.3f} V, E1/2 = {(E_pa + E_pc)*.5:.3f} V&#39;) # Plot the results fig = make_subplots() fig.add_scatter(x = E_red, y = I_red, name = f&#39;I{textsub(&#34;red&#34;)}&#39;, line = dict(color = &#39;red&#39;)) fig.add_scatter(x = E_ox, y = I_ox, name = f&#39;I{textsub(&#34;ox&#34;)}&#39;, line = dict(color = &#39;blue&#39;)) fig.add_scatter(x = E_red[E_pa_index], y = I_red[E_pa_index], name = f&#39;E{textsub(&#34;pa&#34;)}&#39;, mode = &#39;markers&#39;, marker = dict(color = &#39;red&#39;, size = 10)) fig.add_scatter(x = E_ox[E_pc_index], y = I_ox[E_pc_index], name = f&#39;E{textsub(&#34;pc&#34;)}&#39;, mode = &#39;markers&#39;, marker = dict(color = &#39;blue&#39;, size = 10)) # Format the plot and display fig.update_xaxes(title = f&#39;{textit(&#34;E&#34;)} /V&#39;, tickformat = &#39;0.1f&#39;) fig.update_yaxes(title = f&#39;{textit(&#34;i&#34;)} /A&#39;) fig.update_layout(template = &#39;simple_white&#39;, font_family = &#39;arial&#39;, font_size = 18, width = 3 * 300, height = 2 * 300, margin = dict(b = 10, t = 30, l = 10, r = 10)) fig.show(&#39;png&#39;) fig.write_image(&#39;CVpeaks.png&#39;) Solutions to Exercises Targeted exercises Finding local maxima using scipy.signal.find_peaks Exercise 0 Go to the book’s website (website) and get the following files: ‘Raman UIO66.csv’, ‘HDI IR.csv’, and ‘AuNP Raman.csv’</description>
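The prominence-based peak rejection used above can be demonstrated on synthetic data. The peak positions, widths, and noise level below are made up for illustration; the 10% prominence threshold follows the chapter code.

```python
import numpy as np
import scipy.signal as sps

rng = np.random.default_rng(1)

# Two Gaussian peaks of different heights on a slightly noisy baseline
x = np.linspace(0, 10, 1000)
y = np.exp(-1 * (x - 3)**2 / 0.05) + 0.5 * np.exp(-1 * (x - 7)**2 / 0.05)
y = y + rng.normal(0, 0.005, x.size)

# A prominence threshold of 10% of the maximum rejects noise spikes
# while keeping both genuine peaks
peak_indices, peak_dict = sps.find_peaks(y, prominence = 0.1 * np.max(y))
print('peak positions:', np.round(x[peak_indices], 2))
```

Without the prominence filter, find_peaks would also report every tiny local maximum produced by the noise.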
    </item>
    <item>
      <title>Chapter 13</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-13/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-13/index.html</guid>
      <description>Data Chapter 13 uses the same data as for Chapter 12&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Find the peaks of the reduction and oxidation waves in a CV experiment and calculate E_1/2 Requires: CV data in a .csv file with reduction first then oxidation Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250322 - initial version v1.1.0 - 250323 - refactored to make AnalyzeCV function importable &#39;&#39;&#39; import numpy as np import scipy.signal as sps from plotly.subplots import make_subplots from codechembook.quickTools import quickOpenFilename, quickPopupMessage from codechembook.symbols.greek import Delta from codechembook.symbols.typesettingHTML import textsub, textit def SplitWaves(E, I): &#39;&#39;&#39; Split E and I arrays into separate reduction and oxidation E and I arrays REQUIRED PARAMETERS: E (ndarray): potential data points I (ndarray): current data points RETURNS: (ndarray): E_red, reduction wave potential data points (ndarray): I_red, reduction wave current data points (ndarray): E_ox, oxidation wave potential data points (ndarray): I_ox, oxidation wave current data points &#39;&#39;&#39; # Find the index of the scan turnaround to separate the oxidation and reduction waves i = 1 # a counter variable stop = False # will change to true to stop the loop while i &lt; len(E) and stop == False: # loop until we reach the end or find the turnaround if E[i] &lt; E[i-1]: # as long as E is reducing, then we are on reduction i += 1 else: # E started to increase so this must be the turnaround stop = True # Get separate arrays for reduction and oxidation E_red, I_red = E[:i], I[:i] # reduction is simple if E[i] == E[i+1]: # check special case where two points are the same at the turnaround E_ox, I_ox = E[i:], I[i:] else: # only one point at the turnaround E_ox, I_ox = E[i-1:], I[i-1:] # we want to include the turnaround point too, thus i-1 return E_red,
I_red, E_ox, I_ox def AnalyzeCV(E_red, I_red, E_ox, I_ox): &#39;&#39;&#39; Find the peaks of reduction and oxidation waves and get E_1/2 REQUIRED PARAMS: E_red (ndarray): reduction wave E points I_red (ndarray): reduction wave I points E_ox (ndarray): oxidation wave E points I_ox (ndarray): oxidation wave I points RETURNS: (float): E_pa, the peak of the anodic wave (float): E_pc, the peak of the cathodic wave (float): delta_E, the difference between the peaks (float): E_1/2, the reduction potential (ndarray): E_pa_index, the indices of the peaks of the anodic wave (ndarray): E_pc_index, the indices of the peaks of the cathodic wave &#39;&#39;&#39; # Find peaks in reduction and oxidation regions E_pa_index, E_pa_dict = sps.find_peaks(-1*I_red, prominence=0.1*np.max(-1*I_red)) E_pc_index, E_pc_dict = sps.find_peaks(I_ox, prominence=0.1*np.max(I_ox)) # Calculate E_pa and E_pc E_pa = E_red[E_pa_index[0]] E_pc = E_ox[E_pc_index[-1]] # Calculate delta E and E1/2 delta_E = np.abs(E_pa - E_pc) E1_2 = (E_pa + E_pc) * 0.5 return E_pa, E_pc, delta_E, E1_2, E_pa_index, E_pc_index if __name__ == &#39;__main__&#39;: # Find the data file quickPopupMessage(message = &#39;Select a file with CV data.&#39;) filename = quickOpenFilename(filetypes = &#39;CSV files, *.csv&#39;) # Read the data - reduction and oxidation waves are included here E, I = np.genfromtxt(filename, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Get separate E and I arrays for oxidation and reduction E_red, I_red, E_ox, I_ox = SplitWaves(E, I) # Call the analysis function E_pa, E_pc, delta_E, E1_2, E_pa_index, E_pc_index = AnalyzeCV(E_red, I_red, E_ox, I_ox) # Print the results print(f&#39;E_pa = {E_pa:.3f} V&#39;) print(f&#39;E_pc = {E_pc:.3f} V&#39;) print(f&#39;{Delta}E = {np.abs(E_pa - E_pc):.3f} V, E1/2 = {(E_pa + E_pc)*.5:.3f} V&#39;) # Plot the results fig = make_subplots() fig.add_scatter(x = E_red, y = I_red, name = f&#39;I{textsub(&#34;red&#34;)}&#39;, line = dict(color =
&#39;red&#39;)) fig.add_scatter(x = E_ox, y = I_ox, name = f&#39;I{textsub(&#34;ox&#34;)}&#39;, line = dict(color = &#39;blue&#39;)) fig.add_scatter(x = E_red[E_pa_index], y = I_red[E_pa_index], name = f&#39;E{textsub(&#34;pa&#34;)}&#39;, mode = &#39;markers&#39;, marker = dict(color = &#39;red&#39;, size = 10)) fig.add_scatter(x = E_ox[E_pc_index], y = I_ox[E_pc_index], name = f&#39;E{textsub(&#34;pc&#34;)}&#39;, mode = &#39;markers&#39;, marker = dict(color = &#39;blue&#39;, size = 10)) # Format the plot and display fig.update_xaxes(title = f&#39;{textit(&#34;E&#34;)} /V&#39;, tickformat = &#39;0.1f&#39;) fig.update_yaxes(title = f&#39;{textit(&#34;i&#34;)} /A&#39;) fig.update_layout(template = &#39;simple_white&#39;, font_family = &#39;arial&#39;, font_size = 18, width = 3 * 300, height = 2 * 300, margin = dict(b = 10, t = 30, l = 10, r = 10)) fig.show(&#39;png&#39;) fig.write_image(&#39;CVpeaks.png&#39;) &#39;&#39;&#39; Testing numerical derivatives by taking the derivative of a cos function Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250427 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots # Sample a cos function and its analytical derivative, sin, over one period x = np.linspace(0, 2*np.pi, 20) y = np.cos(x) dy_analytical = -1 * np.sin(x) # Compute the numerical derivative using 1st and 2nd order edges dy_gradient_1 = np.gradient(y, x, edge_order = 1) dy_gradient_2 = np.gradient(y, x, edge_order = 2) # Plot all the curves to compare fig = make_subplots() fig.add_scatter(x = x, y = y, name = &#39;cos(x)&#39;, line = dict(color = &#39;black&#39;), ) fig.add_scatter(x = x, y = dy_gradient_1, name = &#39;Gradient, order 1&#39;, line = dict(color = &#39;gray&#39;, width = 12),) fig.add_scatter(x = x, y = dy_gradient_2, name = &#39;Gradient, order 2&#39;, line = dict(color = &#39;black&#39;, dash = &#39;solid&#39;, width = 8),) fig.add_scatter(x = x, y = dy_analytical, name = 
&#39;d(cos(x))/dx&#39;, line = dict(color = &#39;lightgrey&#39;, dash = &#39;dot&#39;, width = 4), ) fig.update_yaxes(title = &#39;y&#39;) fig.update_xaxes(title = &#39;x&#39;) fig.update_layout(template = &#39;simple_white&#39;, font_size = 18, legend = dict(x = 1, y = 0, xanchor = &#39;right&#39;), width = 3 * 300, height = 2 * 300, margin = dict(b = 10, t = 30, l = 10, r = 10)) fig.show(&#39;png&#39;) fig.write_image(format = &#39;png&#39;, file = &#39;CosDerivativeExample.png&#39;) &#39;&#39;&#39; Get E_1/2, i_pa, and i_pc for a CV experiment Requires: CVPeakFind_refactored.py, CV data in a .csv file with reduction first then oxidation Written by: Chris Johnson and Ben Lear (authors@codechembook.com) v1.0.0 - 250427 - initial version &#39;&#39;&#39; import numpy as np from plotly.subplots import make_subplots from lmfit.models import LinearModel from codechembook.quickTools import quickOpenFilename, importFromPy, quickPopupMessage from codechembook.symbols.greek import Delta from codechembook.symbols.typesettingHTML import textsub, textit importFromPy(&#39;CVPeakFind_refactored.py&#39;, &#39;SplitWaves&#39;, &#39;AnalyzeCV&#39;) # Get the capacitive contribution to the current def getCapCharge(I, E, E_peak): &#39;&#39;&#39; Compute the capacitive charging component of the i vs.
E wave REQUIRED PARAMS: I (ndarray): the current data points E (ndarray): the potential data points E_peak (int): the index of the peak in the wave RETURNS: (ndarray): cap, the linearly extrapolated capacitive current at each E (ndarray): dI, the derivative of the current (ndarray): d2I, the second derivative of the current (ndarray): std_d2I, the standard deviation over a subset of E (ndarray): std_valid, the points at which the std is evaluated &#39;&#39;&#39; # Take the first and second derivatives dI = np.gradient(I, E) d2I = np.gradient(dI, E) # Estimate the standard deviation of the second derivative of the current # at each point by calculating it in a moving window window = 5 std_d2I = np.array([np.std(d2I[i-window:i+window]) for i in np.arange(10, E_peak)]) # Find the indices of points at which the standard deviation is less than twice the minimum std_valid = np.array([i for i in range(len(std_d2I)) if std_d2I[i] &lt; 2*np.min(std_d2I)]) # Find the first contiguous range of points with a low standard deviation # First we calculate the difference between the indices calculated above # and the indices we would expect if every point was contiguous std_valid_shift = std_valid - np.arange(np.min(std_valid), np.min(std_valid) + len(std_valid)) # Select only the indices where the difference is zero, i.e.
the range was contiguous deriv_indices = [val for val, test in zip(std_valid, std_valid_shift) if test == 0] # Fit that range to a linear model fit = LinearModel() fit_results = fit.fit(I[deriv_indices], x = E[deriv_indices]) # Compute the capacitive charging component cap = fit_results.best_values[&#39;slope&#39;] * E + fit_results.best_values[&#39;intercept&#39;] return cap # Find the data file quickPopupMessage(message = &#39;Select a file with CV data.&#39;) filename = quickOpenFilename(filetypes = &#39;CSV files, *.csv&#39;) # Read the data - reduction and oxidation waves are included here E, I = np.genfromtxt(filename, delimiter = &#39;,&#39;, skip_header = 1, unpack = True) # Get separate E and I arrays for oxidation and reduction E_red, I_red, E_ox, I_ox = SplitWaves(E, I) # Call the analysis function E_pa, E_pc, delta_E, E1_2, E_pa_index, E_pc_index = AnalyzeCV(E_red, I_red, E_ox, I_ox) # Get the capacitive currents for both waves cap_red = getCapCharge(I_red, E_red, E_pa_index[0]) cap_ox = getCapCharge(I_ox, E_ox, E_pc_index[0]) # Print the results print(f&#39;Epa = {E_red[E_pa_index[0]]:.3f} V, ipa = {I_red[E_pa_index[0]] - cap_red[E_pa_index[0]]:9.3e} A&#39;) print(f&#39;Epc = {E_ox[E_pc_index[0]]:.3f} V, ipc = {I_ox[E_pc_index[0]] - cap_ox[E_pc_index[0]]:9.3e} A&#39;) print(f&#39;{Delta}E = {np.abs(E_pa - E_pc):.3f} V, E1/2 = {(E_pa + E_pc)*.5:.3f} V&#39;) # Plot the results fig = make_subplots() fig.add_scatter(x = E_red, y = I_red, name = f&#39;I{textsub(&#34;red&#34;)}&#39;, line = dict(color = &#39;red&#39;)) fig.add_scatter(x = E_ox, y = I_ox, name = f&#39;I{textsub(&#34;ox&#34;)}&#39;, line = dict(color = &#39;blue&#39;)) fig.add_scatter(x = [E_red[E_pa_index]], y = [I_red[E_pa_index]], name = f&#39;E{textsub(&#34;pa&#34;)}&#39;, mode = &#39;markers&#39;, marker = dict(color = &#39;red&#39;, size = 10)) fig.add_scatter(x = [E_ox[E_pc_index]], y = [I_ox[E_pc_index]], name = f&#39;E{textsub(&#34;pc&#34;)}&#39;, mode = &#39;markers&#39;, marker =
dict(color = &#39;blue&#39;, size = 10)) fig.add_scatter(x = E_red, y = cap_red, showlegend = False, line = dict(color = &#39;red&#39;, dash = &#39;dash&#39;)) fig.add_scatter(x = E_ox, y = cap_ox, showlegend = False, line = dict(color = &#39;blue&#39;, dash = &#39;dash&#39;)) # Format the plot and display fig.update_xaxes(title = f&#39;{textit(&#34;E&#34;)} /V&#39;, tickformat = &#39;0.1f&#39;) fig.update_yaxes(title = f&#39;{textit(&#34;i&#34;)} /A&#39;) fig.update_layout(template = &#39;simple_white&#39;, font_family = &#39;arial&#39;, font_size = 18, width = 3 * 300, height = 2 * 300, margin = dict(b = 10, t = 30, l = 10, r = 10)) fig.show(&#39;png&#39;) fig.write_image(&#39;CVpeaks.png&#39;) Solutions to Exercises Targeted exercises Refactoring code to turn it into functions you can reuse Exercise 0 Refactor the final code from Chapter 1. Create separate functions to handle the printing output, the calcPlateVols() functionality, and defining the geometry and concentrations of the well plate. The latter should take as arguments: the number of rows, the number of columns and the starting and ending concentrations and ionic strengths.</description>
    </item>
    <item>
      <title>Chapter 14</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-14/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-14/index.html</guid>
<description>Data Chapter 14 uses the same data as Chapter 3.&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; A script to produce .csv files with data from my titration paper Requires: DecomposeSpectrum.py Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250502 - initial version &#39;&#39;&#39; from codechembook.quickTools import importFromPy, quickSaveFilename, quickPopupMessage importFromPy(&#39;DecomposeSpectrum.py&#39;, &#39;trimmed_wavelength&#39;, &#39;trimmed_absorbance&#39;, &#39;result&#39;) # Gather the fit parameters and spectral data into a dictionary spectral_data = dict(wavelength = trimmed_wavelength, absorption = trimmed_absorbance) # Load the individual components of the fit into the dictionary comps = result.eval_components(x = trimmed_wavelength) for c in comps: spectral_data[f&#39;{c[:-1]}&#39;] = comps[c] # Get the path for the file that we want to save quickPopupMessage(&#39;Choose a CSV file to save the data.&#39;) file = quickSaveFilename(filetypes=&#39;CSV files, *.csv&#39;, initialpath=&#39;./report.csv&#39;, title=&#39;Save your file&#39;) # Write the spectral data and fit parameters to a CSV file with open(file, &#39;w&#39;, encoding=&#39;utf-8-sig&#39;) as f: # open a file to write, in write mode # Write the header line, starting with the data line = &#39;wavelength, exp, fit, gaussian1, gaussian2, gaussian3, &#39; # Now add the headers for the columns with gaussian fits for j in range(3): line = line + f&#39;g{j+1} amplitude, g{j+1} amplitude uncertainty, g{j+1} center, g{j+1} center uncertainty, g{j+1} sigma, g{j+1} sigma uncertainty, &#39; # Now add the linear part line = line + &#39;slope, slope uncertainty, intercept, intercept uncertainty\n&#39; f.write(line) # Construct the complex CSV file.
We need to write out N_data_points rows # Some rows at the top will have extra columns for the fit data for i in range(len(trimmed_wavelength)): # loop through the rows we need line = &#39;&#39; # start with a blank line on each iteration for data in spectral_data: # iterate through keys in the spectral data dictionary line = line + f&#39;{spectral_data[data][i]:E}, &#39; # add data values to line in engineering format # Treat rows 1-3 differently to print the best fitting parameters for the gaussian components if i == 0: # we can add in parameter value information for j in range(3): line = line + f&#39;{result.params[f&#34;g{j+1}_amplitude&#34;].value}, &#39; line = line + f&#39;{result.params[f&#34;g{j+1}_amplitude&#34;].stderr}, &#39; line = line + f&#39;{result.params[f&#34;g{j+1}_center&#34;].value}, &#39; line = line + f&#39;{result.params[f&#34;g{j+1}_center&#34;].stderr}, &#39; line = line + f&#39;{result.params[f&#34;g{j+1}_sigma&#34;].value}, &#39; line = line + f&#39;{result.params[f&#34;g{j+1}_sigma&#34;].stderr}, &#39; line = line + f&#39;{result.params[&#34;lin_slope&#34;].value:E}, &#39; line = line + f&#39;{result.params[&#34;lin_slope&#34;].stderr:E},&#39; line = line + f&#39;{result.params[&#34;lin_intercept&#34;].value:E}, &#39; line = line + f&#39;{result.params[&#34;lin_intercept&#34;].stderr:E}&#39; line = line + &#39;\n&#39; # add a newline character to the end of our line f.write(line) # write the line &#39;&#39;&#39; Produce a Word document with a table for the titration paper Requires: DecomposeSpectrum.py Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250502 - initial version &#39;&#39;&#39; from docx import Document from docx.oxml import OxmlElement, parse_xml from docx.oxml.ns import qn, nsdecls from docx.enum.text import WD_PARAGRAPH_ALIGNMENT from codechembook.quickTools import importFromPy, quickSaveFilename, quickPopupMessage importFromPy(&#39;DecomposeSpectrum.py&#39;, &#39;result&#39;) # Start a
new Word document doc = Document() # Add a section title - we have to handle the subscript as a separate run title = doc.add_heading(level=1) # Use the preset heading font title_run = title.add_run(&#39;Composition of the Ni(pz)&#39;) # Text for the start of the heading; pz = pyrazine subscript_run = title.add_run(&#39;2&#39;) # Text that is supposed to be subscripted subscript_run.font.subscript = True # Change the text formatting to subscript title.add_run(&#39; MLCT band&#39;) # Change back to normal font and keep going # Add the introductory paragraph with subscript paragraph = doc.add_paragraph(&#39;Parameters extracted from the spectral decomposition of the MLCT band for Ni(pz)&#39;) paragraph_run = paragraph.add_run(&#39;2&#39;) paragraph_run.font.subscript = True paragraph.add_run(&#39; into Gaussian contributions. &#39;) # Create a table parameters = [&#39;amplitude&#39;, &#39;center&#39;, &#39;sigma&#39;] ncomponents = 3 # the number of gaussians ncols = len(parameters)*2 # each gaussian has three parameters, each with uncertainty table = doc.add_table(rows=ncomponents + 1, cols=ncols + 1) # create a blank table table.style = &#39;Table Grid&#39; # style the table # Add header row with the names of the fit parameters hdr_cells = table.rows[0].cells hdr_cells[0].text = &#39;Component&#39; for i, name in enumerate(parameters): hdr_cells[2*i+1].text = f&#39;{name}&#39; hdr_cells[2*i+2].text = f&#39;{name} uncertainty&#39; # Make text in header cells bold and shaded gray, with a light gray background for cell in hdr_cells: # Get the instructions for Word to apply the text shading and give it to the cell shading_elm = parse_xml(r&#39;&lt;w:shd {} w:fill=&#34;D9D9D9&#34;/&gt;&#39;.format(nsdecls(&#39;w&#39;))) # must create each time cell._tc.get_or_add_tcPr().append(shading_elm) # Loop through each run in each paragraph in the cell for paragraph in cell.paragraphs: for run in paragraph.runs: run.font.bold = True shading_elm =
OxmlElement(&#39;w:shd&#39;) # create a shading element shading_elm.set(qn(&#39;w:fill&#39;), &#39;C0C0C0&#39;) # set the text color run._element.append(shading_elm) # add it back in # Loop through each gaussian component and add the parameter values to the table for i, model in enumerate([&#39;g1&#39;, &#39;g2&#39;, &#39;g3&#39;]): data_cells = table.rows[i+1].cells # get a collection of new cells data_cells[0].text = f&#39;{model}&#39; # set the row label # Loop through the parameters and print their values in the corresponding cell for j, parameter in enumerate(parameters): data_cells[2*j+1].text = f&#39;{result.params[f&#34;{model}_{parameter}&#34;].value:.2f}&#39; data_cells[2*j+2].text = f&#39;{result.params[f&#34;{model}_{parameter}&#34;].stderr:.2f}&#39; data_cells[2*j+1].paragraphs[0].alignment = WD_PARAGRAPH_ALIGNMENT.RIGHT data_cells[2*j+2].paragraphs[0].alignment = WD_PARAGRAPH_ALIGNMENT.RIGHT # Open a file dialog to pick the file to save to quickPopupMessage(message = &#39;Choose a filename for the Word document.&#39;) file = quickSaveFilename(title = &#39;Please choose a filename to save as.&#39;, filetypes = &#39;Word Documents, *.docx&#39;) # Save the document doc.save(file) Solutions to Exercises Targeted exercises Writing text files with arbitrary formatting using with open Exercise 0 Produce a narrative fit report that explains the model, the fit approach, and the results for the linear fit produced by the final code in Chapter 5. You can append the following to the code for Chapter 5.</description>
    </item>
    <item>
      <title>Chapter 15</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-15/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-15/index.html</guid>
<description>Data Download Data for Chapter 15&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Fit data in an Excel file and update the file with the results Requires: existing titration data excel file Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250508 - initial version &#39;&#39;&#39; import numpy as np from lmfit import Model from openpyxl import load_workbook from openpyxl.chart import Reference, Series from openpyxl.styles import PatternFill, Border, Side, Font from codechembook.quickTools import importFromPy, quickOpenFilename, quickSaveFilename, quickPopupMessage importFromPy(&#39;TitrationModel.py&#39;, &#39;pH_response&#39;) # Read the excel file and go to the right worksheet quickPopupMessage(message = &#39;Select the Excel workbook containing the data.&#39;) filename = quickOpenFilename(filetypes = &#39;Excel Workbooks, *.xlsx&#39;) wb = load_workbook(filename) # Load the workbook into Python ws = wb[&#39;data&#39;] # Select a worksheet # Get the data from the columns of the worksheet row by row exp_eqs, exp_pHs = [], [] for eq, pH in zip(ws[&#39;A&#39;], ws[&#39;B&#39;]): try: exp_eqs.append(float(eq.value)) exp_pHs.append(float(pH.value)) except: pass # Set up a new model with the pH_response function and &#39;eqs&#39; as the independent variable pH_model = Model(pH_response, independent_vars=[&#39;eqs&#39;]) # Set up the fit parameter with the average pH as the initial guess pH_params = pH_model.make_params() pH_params.add(&#39;pKa&#39;, value = np.mean(exp_pHs)) # specifications for the parameter # Fit the model to the data pH_fit = pH_model.fit(data = exp_pHs, eqs = exp_eqs, params = pH_params, base_i = exp_eqs[0]) # Specify the colors and borders to use in the worksheet font_color = Font(color=&#39;FFFFFF&#39;) pink_fill = PatternFill(start_color=&#39;FF9194&#39;, end_color=&#39;FF9194&#39;, fill_type=&#39;solid&#39;) cell_border =
Border(left=Side(style=&#39;thin&#39;), right=Side(style=&#39;thin&#39;), top=Side(style=&#39;thin&#39;), bottom=Side(style=&#39;thin&#39;)) # Label the header cells and format ws[&#39;C1&#39;] = &#39;fit&#39; # column heading ws[&#39;C1&#39;].fill = pink_fill ws[&#39;C1&#39;].border = cell_border ws[&#39;D1&#39;] = &#39;parameter&#39; ws[&#39;E1&#39;] = &#39;value&#39; ws[&#39;F1&#39;] = &#39;uncertainty&#39; # Loop through each value in the best fitting line and add it to the corresponding cell for i, value in enumerate(pH_fit.best_fit): ws[f&#39;C{i+2}&#39;] = round(value, 3) # set the value ws[f&#39;C{i+2}&#39;].number_format = &#39;0.000&#39; # ensure we have three decimal places showing ws[f&#39;C{i+2}&#39;].fill = pink_fill ws[f&#39;C{i+2}&#39;].border = cell_border # Add the best fitting parameters to the corresponding cells and format ws[&#39;E2&#39;] = f&#39;{pH_fit.params[&#34;pKa&#34;].value:.3f}&#39; ws[&#39;F2&#39;] = f&#39;{pH_fit.params[&#34;pKa&#34;].stderr:.3f}&#39; ws[&#39;F2&#39;].font = font_color ws[&#39;F2&#39;].fill = PatternFill(start_color=&#39;51154A&#39;, end_color=&#39;51154A&#39;, fill_type=&#39;solid&#39;) # Get a variable for the chart in the worksheet chart = ws._charts[0] # Create reference to the data to be plotted in the chart x_values = Reference(ws, min_col=1, min_row=2, max_col=1, max_row=len(pH_fit.best_fit)+1) y_values = Reference(ws, min_col=3, min_row=2, max_col=3, max_row=len(pH_fit.best_fit)+1) # Create a data series for the data fit_data = Series(y_values, x_values, title_from_data=False) # Set the data series line color and width fit_data.graphicalProperties.line.solidFill = &#39;FF9194&#39; # Red color fit_data.graphicalProperties.line.width = 28575 # Width in EMUs (1 pt = 12700 EMUs) # Add the fit line to the chart chart.series.append(fit_data) # Save the changes back to the same file quickPopupMessage(message = &#39;Choose a filename to save the new Excel workbook.&#39;) save_filename = quickSaveFilename(filetypes = 
&#39;Excel Workbooks, *.xlsx&#39;) wb.save(save_filename) Solutions to Exercises Targeted exercises Reading and writing excel files using openpyxl Exercise 1 Go get the ‘Stress-Strain.xlsm’ file from our website (website).</description>
    </item>
    <item>
      <title>Chapter 16</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-16/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-16/index.html</guid>
      <description>Data Download Data for Chapter 16&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter &#39;&#39;&#39; Recolor thermal images and add them on top of photos or other visual images Requires: two images in .jpg format Written by: Ben Lear and Chris Johnson (authors@codechembook.com) v1.0.0 - 250527 - initial version &#39;&#39;&#39; import numpy as np from PIL import Image from codechembook.quickTools import quickOpenFilename, quickPopupMessage # Create a new color scale for recoloring the pixels new_scale = [] for i in range(3): # R, G, and B for j in range(256): # 8 bits each color = [0, 0, 0] # start with black for k in range(i): color[k] = 255 color[i] = j new_scale.append(color) # Open the thermal image and convert it to a 2d array quickPopupMessage(message = &#39;Select the thermal image file.&#39;) original_thermal_image = Image.open(quickOpenFilename(filetypes = &#39;Thermal Image file, *_thermal.jpg&#39;)) original_thermal_data = np.array(original_thermal_image) # Make a new 2d array of zeros to hold the recolored image new_thermal_data = np.zeros_like(original_thermal_data) # Determine the shape of the image in pixels x pixels width, height = original_thermal_data.shape[0:2] # Loop over pixels and set color of new one based on luminosity of thermal for i in range(width): for j in range(height): current_color = original_thermal_data[i, j][0] # get the luminosity of the pixel new_thermal_data[i, j] = new_scale[current_color*3] # find corresponding color in new color scale # Convert new data to image new_thermal_image = Image.fromarray(new_thermal_data.astype(&#39;uint8&#39;)) # Combine new thermal image with visual image and crop quickPopupMessage(message = &#39;Select the visual image file.&#39;) old_visual_image = Image.open(quickOpenFilename(filetypes = &#39;Visual Image file, *_visual.jpg&#39;)) #open visual image result = Image.blend(new_thermal_image, old_visual_image, alpha=0.3) # Blend the images 
together cropped_image = result.crop([0.4 * width, # left 0.2*height, # top width - 0.1*width, # right height - 0.42*height]) # bottom # Show final combined and cropped image cropped_image.show() Solutions to Exercises Targeted exercises Understanding how computers represent images Exercise 0 Create an array of 1s and 0s where the arrangement represents a picture of a frog.</description>
    </item>
    <item>
      <title>Chapter 17</title>
      <link>https://codingforchemistsbook.com/book_material/chapter-17/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>authors@codingforchemistsbook.com (Benjamin Lear and Christopher Johnson)</author>
      <guid>https://codingforchemistsbook.com/book_material/chapter-17/index.html</guid>
      <description>Data Download Data for Chapter 17&#xA;Alternatively, individual files can be found in the Data section.&#xA;Code from chapter Code for this chapter will appear here.&#xA;Solutions to Exercises Exercise 1 There are no exercises for this chapter. Congratulations on completing the book!&#xA;import numpy as np import plotly.graph_objects as go theta = np.linspace(0, 2*np.pi, 200) f_x = np.cos(theta) f_y = np.sin(theta) left = (-0.3, 0.4) right = (0.3, 0.4) m_theta = np.linspace(0, np.pi, 100) m_x = 0.5 * np.cos(m_theta) m_y = -0.4 + 0.2 * -np.sin(m_theta) m_x = 0.66*np.cos(theta) m_y = -abs(0.66*np.sin(theta)) fig = go.Figure() fig.add_scatter(x=f_x, y=f_y, #fill=&#39;toself&#39;, mode=&#39;lines&#39;, line_color=&#39;black&#39;, #fillcolor=&#39;yellow&#39; ) fig.add_scatter(x=[left[0]], y=[left[1]], mode=&#39;markers&#39;, marker=dict(size=20, color=&#39;black&#39;)) fig.add_scatter(x=[right[0]], y=[right[1]], mode=&#39;markers&#39;, marker=dict(size=20, color=&#39;black&#39;)) fig.add_scatter(x=m_x, y=m_y, mode=&#39;lines&#39;, line=dict(width=4, color=&#39;black&#39;)) fig.update_layout( title=&#34;Congratulations!&#34;, template = &#34;simple_white&#34;, paper_bgcolor=&#39;yellow&#39;, plot_bgcolor=&#39;yellow&#39;, xaxis=dict(showgrid=False, zeroline=False, visible=False), yaxis=dict(showgrid=False, zeroline=False, visible=False), width=500, height=500, margin = dict(l = 40, r = 40, t = 40, b = 40), showlegend=False ) fig.update_yaxes(scaleanchor=&#34;x&#34;, scaleratio=1) fig.show(&#34;png&#34;)</description>
    </item>
  </channel>
</rss>