Wednesday, January 27, 2016

Supreme Court and electricity, what's the big deal? An intro to the electrical grid, demand response, and the highest court's opinion.

You may have heard about the Supreme Court making a ruling which greatly benefits demand response providers.  It's a very interesting issue, so here's a bit more information.  I'll start with a little background.

Here is a link to the text of the ruling.  It is an interesting read, and incredibly well written.

Background on the grid

The amount of electricity that is consumed has to be met by the power generators; as more power is used, more must be generated.  When the demand for electricity is at a minimum, it is known as base load, and this is the cheapest power to produce.  As the electrical demand increases, more generators must be turned on, and the electricity they produce is more expensive.  When electrical demand is at its highest in the middle of the day, it is known as peak demand.  Generating electricity for peak demand is the most expensive.

To meet peak demand, power plants must be built which can meet the required load, even though they may be rarely used.  Demand spikes are most common during heat waves in the summer; nonetheless, the station must be built even if it is only used a few days a year.  This is one of the reasons peak electricity is expensive.

The issues with matching electrical supply to demand become a bit more complicated when considering renewable technologies.  Unfortunately, solar and wind power do not turn on when needed the way a natural gas turbine will.  This makes the problem trickier, and makes peak demand even more challenging to predict.  Because of this, real-time energy monitoring and reduction is an important strategy.

Demand Response

What it is


Since peak demand is so expensive, it seems logical that if we could spread our energy usage more evenly over the day, we could make it cheaper.  That is the idea behind demand response.  From the standpoint of the energy distribution utilities, there are really three options for dealing with peak demand.

(1) Purchase expensive electricity from the suppliers.
(2) Store energy during off-peak times and use it during peak times (smart grids).
(3) Work with consumers to help lower peak demand (demand response).

Currently, smart grids can only provide a small portion of peak electrical needs, so in reality it is a combination of options 1 and 3 (expensive electricity and demand response).

How it works

Demand response plans are carried out on a case-by-case basis.  Every business gets a strategy tailor-made to suit its needs while ensuring business demands are met.  A hotel may turn off some ice machines, or wait to run dishwashers and clothes dryers.  There are various steps all sorts of businesses can take, and real-time energy monitoring is the key.

Economics of demand response

To the consumer
The economics of supply and demand do not work with retail electricity the way they do with other commodities.  In most markets there is a clear correlation between available supply, demand, and price.  Consumer electricity, however, is generally billed at a fixed price, with different values for peak and off-peak usage determined by the time of day.  This model insulates the consumer from the true cost of the electricity.

Demand response is a way of creating more incentive for the consumer to shift their energy consumption away from peak load in real time.  This saves money for both the consumer and the electrical utility, and also aids in stabilizing the balance between supply and demand on the grid.
 
To the utility
When the utility purchases power, the pricing scheme is a bit unusual.  It is explained with great eloquence in the ruling, but I will paraphrase.  When a local utility buys power from producers, every producer receives the highest accepted bid.  Take a morning on a hot day when a utility receives two bids which, combined, will cover the current power requirements.  The first bid is $30/unit, and the second is $50/unit.  The local utility then pays $50/unit to both producers.  Later that day, the heat spikes and everybody turns on air conditioning.  The utility must then buy more power at a higher rate: to meet demand, it has to pay $80/unit for this energy.  Because of the way the pricing works, the utility now pays $80/unit for all of the power it is consuming.  This inelasticity in the market adds to instability in the electrical grid.
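The clearing-price mechanism described above is easy to sketch in code. The function below is a toy uniform-price auction, not the actual market software; the bids and demand figures are the hypothetical ones from the example.

```python
# Uniform-price clearing: every accepted producer is paid the highest
# accepted bid (the "clearing price"). Bids and demand are illustrative.

def clear_market(bids, demand):
    """bids: list of (price_per_unit, capacity); returns (clearing_price, dispatch)."""
    dispatch = []
    remaining = demand
    for price, capacity in sorted(bids):  # cheapest bids are accepted first
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        dispatch.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total bid capacity")
    clearing_price = dispatch[-1][0]  # price of the last (marginal) unit
    return clearing_price, dispatch

# Morning: two bids cover demand, so both producers are paid $50/unit.
price, _ = clear_market([(30, 100), (50, 100)], demand=150)
print(price)  # 50

# Afternoon heat spike: an $80 bid becomes marginal, and *all* producers
# are now paid $80/unit.
price, _ = clear_market([(30, 100), (50, 100), (80, 100)], demand=250)
print(price)  # 80
```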

This strange pricing scheme makes it even more important for utilities to incentivize load reduction during peak demand.  That is one reason why demand response is a big deal.  Electrical utilities will pay for demand response the same way they would pay a producer's bid for power.  Because of the steep markup on peak power, it is cheaper for the utility to pay for demand response.  Demand response is considered virtual power because it is effectively the same as additional generation.  So instead of paying for traditional power, utilities will purchase virtual power, essentially paying for electrical reduction.  By doing so, this process creates some level of elasticity in the market.
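To see why purchasing "virtual power" can be cheaper than buying peak generation, here is a back-of-the-envelope comparison using the hypothetical prices from the example above; the demand response payment rate is an assumption for illustration.

```python
# Hypothetical peak hour: buying all 250 units at the $80/unit clearing
# price vs. paying demand response providers to shed 100 units, letting
# the clearing price fall back to $50/unit. All numbers are invented.

demand = 250          # units needed at the peak
shed = 100            # units curtailed via demand response
peak_price = 80       # clearing price with the expensive generator online
offpeak_price = 50    # clearing price once the $80 bid is no longer needed
dr_payment = 50       # DR is paid like a generation bid at the clearing price

cost_without_dr = demand * peak_price
cost_with_dr = (demand - shed) * offpeak_price + shed * dr_payment
print(cost_without_dr, cost_with_dr)  # 20000 12500
```

Even though the utility pays demand response providers the full clearing price for every unit shed, avoiding the $80 marginal bid makes every remaining unit cheaper too.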

Now the case

These are the parties involved
 
This case was Federal Energy Regulatory Commission (FERC) v. Electric Power Supply Association (EPSA).

FERC is the commission in charge of wholesale and interstate power distribution, created by the Federal Power Act.
EPSA represents the companies which produce electricity.

In 2011, FERC issued a rule (Order 745) determining that wholesale demand response should be compensated at the same rates paid to companies for electrical generation.  Given the pricing structure utilities use to purchase power, supplying the peak load is highly favorable for the suppliers.  When utilities are not purchasing the peak load, suppliers miss out on the highest-priced sales.  Demand response cuts directly into their bottom line.

Arguments

In short, EPSA argued that FERC overstepped its bounds when it made the rule.  FERC only has jurisdiction over the wholesale market, and EPSA contended the rule was illegal on the grounds that it regulated the retail market, which is left to the states.

The case came before the Supreme Court after the D.C. Circuit ruled in favor of EPSA, determining that demand response regulation should be left to the states.  That decision was appealed, then brought to the nation's highest court and accepted.

Decision

In a 6-2 decision, the court ruled in favor of FERC.  The court held that the regulations set forth by FERC did not directly affect retail rates.  Retail rates would be indirectly affected, as they would be in any market when wholesale rates change.  The court also stated that not allowing FERC to enact such regulations would be in direct conflict with the Federal Power Act, in which Congress gave FERC the power to make these decisions.

Conclusions

This was a very important case, which will help ensure the viability of demand response programs and providers.  We are now one step closer to having these programs become common in households, empowering families to take advantage of savings while helping to stabilize the grid.  With the clear energy savings, it is only a matter of time until these techniques reach the mainstream.  As intermittent renewable energy becomes more widespread, demand response will help keep the grid stable so that we can maximize our renewable energy production and continue the trend toward eliminating the need for fossil fuels.

Wednesday, December 16, 2015

Gasification Product Modeling Compared To Experimental Findings

Gasification is an important technology that will help utilize agricultural waste products to meet growing energy demands.  The high variability in biomass feedstocks causes high variability in the composition and quantity of biogas produced.  The study analyzed in this post (citation and link at the bottom) derives a model to predict the gasification products, and compares them with experimentally obtained data. 

Why is this important?

Gasification of biomass is becoming more widely used in small-scale local production.  Besides having a close to “net zero” carbon impact, it is widely available and under-utilized.  The low energy density inherent in many agricultural and industrial waste products makes them ideal for gasification, which makes proper utilization even more important.  To be considered a clean technology, gasification must take place in a manner that does not create significant atmospheric contamination.  This is done by achieving more complete utilization, as well as lowering concentrations of undesirable by-products.  Having a computer model which can effectively estimate biogas components for specific conditions makes more ideal gasification easier to achieve.  Models have been created for such situations before; however, discrepancies under certain conditions have been large.  This work builds upon previous models and aims to lower the variability between simulation and experimental results.


What they did:

The work took place in three distinct parts: modeling, gasification, and comparison.
1) A mathematical model was derived to predict the constituents of the biogas produced from gasification of pine wood.
2) A fluidized bed gasifier was used (utilizing pine wood) to create biogas for analysis in a gas chromatograph.
3) Results from each part were compared to determine the efficacy of the model.

Model

The model was mostly built upon the work of four previous studies.  Without delving too deep into the derivation, the main assumptions were as follows:
- De-volatilization time was assumed negligible (based on previous work).
- The fluidized bed consists only of emulsion and bubble phases, with a single coefficient governing mass transfer.
- Gas in the bubble and emulsion phases was assumed to act as plug flow.
- Solids in the emulsion phase underwent complete mixing.
- The wake and cloud regions of bubbles in the fluidized bed were neglected.
- Gas in the emulsion phase ascends at the minimum fluidization velocity.
- Final products are calculated using the ideal gas law.

With these assumptions, as well as equations deduced from previous research, the model was created.  Matlab was used to solve the differential equations and obtain numerical values for the results.
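The authors solved the coupled model equations numerically in Matlab. To give a flavor of that workflow (not the paper's actual equations), here is a toy first-order conversion ODE, dX/dt = k(1 - X), integrated with a simple explicit Euler scheme; the rate constant is invented.

```python
import math

# Toy stand-in for the paper's coupled phase balances: a single
# hypothetical first-order conversion ODE, dX/dt = k * (1 - X),
# integrated with explicit Euler. k is an assumed rate constant.
k = 0.5      # 1/s, illustrative only
dt = 0.001   # time step, s
t_end = 10.0

X, t = 0.0, 0.0
while t < t_end:
    X += dt * k * (1.0 - X)   # Euler update
    t += dt

# The numerical result tracks the analytic solution 1 - exp(-k*t).
print(round(X, 3), round(1.0 - math.exp(-k * t_end), 3))  # 0.993 0.993
```

The real model is a system of such balances for each species in each phase, solved simultaneously, but the numerical workflow is the same idea.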


Gasification

The apparatus (broken down into upstream, gasifier unit, and downstream parts)

Upstream:
The vapor feeding system consisted of water (combined with a heater for steam generation), high pressure air, high pressure nitrogen, as well as valves (to control flow) and flowmeters to measure it.  Biomass feeding was done through the top of the unit so the particles could be introduced into the fluidized bed almost instantaneously.

Gasifier:
The researchers used a small (80 mm internal diameter) fluidized bed gasifier, with wood as the biomass.  The gasifier was split into two physically separated zones, allowing the combustion zone to be fully excluded from the gasification.  Separating the zones ensured the syngas produced was of high quality, and allowed nitrogen to be supplied without stifling the combustion zone, mitigating the need for pure oxygen in the gasification environment.

Downstream:
Post gasification, the flow entered an array of heated ceramic candle filters to remove entrained particles.  After filtering, the gas flowed through a cold trap to bring the stream to room temperature for analysis.  At room temperature the composition was measured using a GC-MS unit.

Procedure

Charcoal was used to heat the bed (isolated zone) to 750°C-800°C.  The testing took place in two phases, pyrolysis and combustion.  The pine wood was first pyrolyzed, and the remaining material subsequently burned.  First, during pyrolysis testing, nitrogen was used as the fluidizing agent.  The downstream flow was measured for the composition of H2, CO, CH4, and CO2, and time-averaged values were reported.  Once pyrolysis was complete, the fluidizing gas was switched to air, and the residual material was combusted.  The resulting gas was then analyzed for CO and CO2 as well as tars (which were broken down into four sub-groups).  Gasification was also carried out using steam alone.


Validation

The model was quite accurate at depicting the constituents of the final gaseous mixture.  The difference between experimental findings and numerical analysis was less than 5% for yield, and less than 2% for composition.  Unfortunately, at high temperatures the model had larger discrepancies (about 20%) for various tar compounds, which are largely comprised of benzene.
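The yield and composition comparisons boil down to a percent-difference calculation, sketched below with placeholder numbers (not the paper's data):

```python
# Percent difference between model prediction and measured value, the
# metric behind the <5% / <2% figures above. Species fractions here are
# hypothetical placeholders, not the study's measurements.

def percent_diff(model, experiment):
    return abs(model - experiment) / experiment * 100.0

# hypothetical mole fractions: (model, measured)
species = {"H2": (0.38, 0.39), "CO": (0.28, 0.275),
           "CO2": (0.20, 0.21), "CH4": (0.14, 0.138)}
for name, (mod, exp) in species.items():
    print(f"{name}: {percent_diff(mod, exp):.1f}%")
```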


Thoughts

The ability for the model to accurately calculate the quantity as well as composition of producer gas is quite impressive.  As more biofuels are utilized, consistency will become an ever more important issue.  Using gasifiers as a means to more cleanly utilize various forms of biomass is a great strategy, but having large differences in gasses produced is a problem if the content is not previously known.  As with other models, surely in the future this will be refined and optimized.

One major problem with biofuels in general is reliability.  The stringent requirements for petroleum fuels that are used commonly make reliability not a big issue currently.  Drivers don't have to worry if their fuel is going to "gunk up" engines, or combust only within a specific temperature range.  Unfortunately these issues are more common in biofuels.  It seems one way to deal with the reliability in liquid fuels would be to gasify the components, then once they are broken down into their constituents, they can be reformed into a reliable fuel.  Although this model looks at the products based on a biomass input, further work could be done to generalize the work.  That would make it easier to turn biomass into reliable liquid fuel.


Citation
Vecchione, L., Moneti, M., Di Carlo, A., Savuto, E., Pallozzi, V., Carlini, M., Boubaker, K., Longo, L., & Colantoni, A. (2015). Steam Gasification of Wood Biomass in a Fluidized Biocatalytic System Bed Gasifier: A Model Development and Validation Using Experiment and Boubaker Polynomials Expansion Scheme (BPES). International Journal of Renewable Energy Development, 4(2), 143-152. doi: 10.14710/ijred.4.2.143-152

Article Link
http://ejournal.undip.ac.id/index.php/ijred/article/view/8641/PDF

Sunday, November 15, 2015

Small scale energy optimization


Matching electrical supply with demand is an important aspect of reliably powering homes and industry; renewable electric sources make this issue more problematic.  Hybrid sustainable-energy systems, when used optimally, can help alleviate stress on the grid during peak usage.  This post will look at a study which uses computer analysis to maximize the return from hybrid systems, both in terms of finances and CO2 emissions.  The article is titled “A New Method to Energy Saving in a Micro Grid”, published in the journal “Sustainability”.  The link as well as the citation can be found at the bottom of the post.

Why is this important?

Using carbon-free technology is the cleanest way to meet our electrical needs.  The generation of electricity from wind turbines and photovoltaic cells makes carbon-free energy generation possible.  The inherent intermittency of these technologies makes matching demand a challenging problem.

When electrical supply is insufficient to meet demand on the grid, blackouts and brownouts occur.  Due to the variability of renewable electric sources, careful management must be used to ensure the correct amount of power is being generated.  Unlike combustion applications, renewable sources of electric energy are not varied by factors humans control.  When the wind is slow, or when the day is cloudy, electricity must be supplied by other means.  Besides the intermittency of sustainable sources, there is the issue of storage: unlike liquid, solid, and gaseous fuels, electricity is not commonly stored.  Using hybrid multi-energy configurations, demand can be matched more easily, which would help stabilize the electrical grid.  As we transition to more sustainable sources, the problems associated with instability will become much more pronounced, and smaller-scale stabilization will become more important.

Small-scale use of sustainable electric technologies is generally done with a single system.  Meeting peak power usage requires large systems, which incur hefty capital costs.  Very little attention has been paid to using multiple sources in conjunction with storage, due to the inherent complexity.  Case studies of small-scale hybrid technologies have shown them to be more cost effective than standalone technologies.

As developed as most of the earth is, there is still a large portion of the world where electricity is not accessible.  Besides regional poverty, another barrier to grid connection is the remoteness of many communities.  Building transmission lines to existing power grids is often unfeasible, so other solutions must be found.  As mentioned before, a simple sustainable system (a single power-generating technology) must be built to withstand peak demand, which leads to difficulties in areas where cost is of the utmost importance.  Capital costs, as well as operational expenses, must be mitigated effectively.



The Experiment


What they did
A real building with large power usage was modeled with a computer simulation; two separate software packages were used.  TRNSYS was used to model the thermodynamics of the various components of power generation and distribution (as process heat and electricity), and to simulate building energy flow.  HOMER (which is widely used in research) was used to optimize the system configuration with a time-varying approach: it works in one-hour time steps and, for each step, solves for the necessary power from each component.  Considering the costs of energy production, investment, maintenance, and fuel purchase, the software decides which conditions minimize financial expenditure.  The building previously contained a large diesel boiler (400 kW) used for hot water as well as heating, plus two large refrigeration units (140 kW each).  Using previous transient analysis of this specific building, the thermal and cooling loads of the system were identified for an entire year.
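HOMER's hour-by-hour dispatch can be caricatured as a merit-order rule: each hour, serve the load from the cheapest available source first. The capacities, costs, and load below are invented for illustration and do not come from the study.

```python
# Minimal merit-order dispatch for one hour, in the spirit of HOMER's
# time-step approach. All capacities, costs, and loads are assumptions.

PV_CAP = 100       # kW peak photovoltaic (availability varies by hour)
CHP_CAP = 200      # kW natural-gas CHP engine
COST = {"pv": 0.0, "chp": 0.09, "grid": 0.17}   # EUR/kWh, illustrative

def dispatch_hour(load_kw, pv_kw):
    """Return {source: kWh} for one hour, filling load cheapest-first."""
    out = {"pv": min(pv_kw, load_kw)}
    remaining = load_kw - out["pv"]
    out["chp"] = min(CHP_CAP, remaining)
    out["grid"] = remaining - out["chp"]
    return out

def hour_cost(dispatch):
    return sum(COST[s] * kwh for s, kwh in dispatch.items())

d = dispatch_hour(load_kw=250, pv_kw=60)
print(d, round(hour_cost(d), 2))  # {'pv': 60, 'chp': 190, 'grid': 0} 17.1
```

Repeating this for all 8760 hours of a year, then adding capital and maintenance costs, gives the kind of net present cost comparison the study reports.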
  
For the simulations, a multi-energy system was implemented to compare costs as well as CO2 emissions to the current system.  The energy generation implemented in the simulation (for heat and electricity) included three items.  A combined heat and power (CHP) internal combustion engine fueled by natural gas (100 kW, 150 kW and 200 kW options were explored) supplied electricity as well as heat.  Photovoltaic solar panels with a peak power of 100 kW also supplied electricity.  A natural gas boiler was used for heat generation.  The systems were evaluated over a 20-year scenario so that factors such as capital cost could be considered more accurately than by using marginal cost alone.

Six different configurations for energy production were explored.  The configurations had varying amounts of power produced from the CHP station, as well as supplemental grid power.  Data was collected for each run pertaining to net present cost of operation, proportional quantity of electricity from each source, thermal load distribution, cost of energy, and CO2 emissions.

For the analysis of energy storage, three systems were examined: lead-acid battery storage, a flywheel, and compressed air energy storage (CAES).

What they found

Energy production
The first important result is that every single configuration used in the model was more cost effective than the current situation. 

Part of the efficacy of these scenarios came from meeting thermal load requirements more efficiently.  Process heat from the CHP unit was almost sufficient to meet all of the building's requirements.  Even in the configuration with the smallest CHP unit, less than 20% of the thermal load was generated by the boiler.  It was noted that the boiler was generally used when the need for electricity was so low that the CHP generator was not running.

On a per unit basis, the cost of energy dropped from 0.17 €/kWh (present configuration) to as low as 0.133 €/kWh, which was achieved with the largest CHP unit (200 kW).  This result is extremely dependent upon the price of natural gas.

Three of the scenarios were highly dependent on the price of electricity, two were highly dependent on the price of natural gas, and one was a more balanced situation.  It turned out the balanced scenario was less than 2% more expensive per kWh than the cheapest (which, as noted above, was highly dependent on natural gas prices). 

When looking at CO2 production, all of the situations explored were significantly lower than the current configuration.  There were three main reasons for this.  Using natural gas instead of diesel fuel produces significantly less carbon dioxide per unit of electricity.  Another key factor in reducing emissions was the solar panels, which generate carbon-free energy.  Finally, using process heat from the CHP unit is much more efficient than venting hot exhaust gases without utilizing some of the heat.

Energy storage
Due to the short lifespan of the batteries, their operating cost per unit of energy delivered was the highest of the three.  The flywheel had the highest capital investment, but it has a very long lifespan as well as a high conversion efficiency.  Compressed air was the least expensive to operate; however, it has a very low conversion efficiency.
The energy storage mechanisms had the effect of lowering demand on the grid during peak times.  This has a positive effect on main-grid stability, as lowering demand during peak hours helps maintain a smaller gap between base and peak strain.
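A rough way to compare the three storage options is lifetime cost per delivered kWh, which folds capital cost, lifespan, and conversion efficiency into one number. All figures below are illustrative guesses, not the study's data.

```python
# Lifetime cost per delivered kWh for each storage option. Capital costs,
# cycle lives, cycle sizes, and efficiencies are invented for illustration.

def cost_per_kwh(capital_eur, cycles_lifetime, kwh_per_cycle, efficiency):
    delivered = cycles_lifetime * kwh_per_cycle * efficiency
    return capital_eur / delivered

options = {
    "lead-acid battery": cost_per_kwh(20000, 1500, 50, 0.80),   # short lifespan
    "flywheel":          cost_per_kwh(60000, 20000, 50, 0.90),  # high capital, long life
    "CAES":              cost_per_kwh(30000, 10000, 50, 0.50),  # cheap, low efficiency
}
for name, c in options.items():
    print(f"{name}: {c:.3f} EUR/kWh")
```

The pattern matches the qualitative result above: the battery's short cycle life dominates its cost, the flywheel amortizes its high capital over many efficient cycles, and CAES pays for its cheap hardware with round-trip losses.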

Thoughts
Although the energy storage helped take strain off the grid (by moving peak usage away from peak grid demand), it did not seem to be nearly as cost effective as creating the electricity on demand (although this was dependent on current natural gas prices).  The cost was about 30% higher, which becomes more economically feasible where a significant price gap exists between peak and off-peak electricity.  As intermittent generation supplies a larger chunk of electrical production, reducing peak fluctuations will become of the utmost importance.  The inherent economic losses involved with electrical storage (both capital investment and system inefficiencies) make it somewhat unattractive for individual micro grids to take on the costs.  It will be important to legislate such price differences to make these technologies more viable.

Although it is not new information, this study helps show the importance of using process heat.  By using natural gas (which burns hotter than diesel), the exhaust can be effectively utilized as heat.  Unfortunately, drilling for natural gas is highly destructive to local ecosystems and is known to contaminate groundwater heavily.  Although fracking is not done everywhere, it is a very common procedure which does not seem to be ending soon.  This approach seems highly applicable to rural communities without access to a main grid for a couple of reasons.  Besides the obvious benefit of supplying electricity where it is lacking, using a gaseous fuel as well as photovoltaic cells to generate electricity lets communities utilize locally available resources.  For most rural areas, even where natural gas would be difficult to obtain, biomass could be gasified and used in such a system.

It would be interesting to see the results of this study carried out using values for biogas.  Even though it has a lower methane content, it would likely still be highly effective in a CHP system.  This could also be highly applicable to rural areas, which commonly have access to biomass.  Combining gasification of biomass with this approach would be a great way to mitigate costs as well as atmospheric carbon generation.  Even if biomass is used (with near net-zero carbon emissions), it could be taken one step further.  Although this seems more applicable to larger-scale operations, if waste treatment is done local to the generation, algal farming can be used to process waste and sequester carbon; for now, however, it may be best to expand the processes one step at a time.

What do you think?

Any comments or thoughts are appreciated!






Citation
Vallati, A., Grignaffini, S., & Romagna, M. (2015). A New Method to Energy Saving in a Micro Grid. Sustainability, 7(10), 13904-13919. doi:10.3390/su71013904

Wednesday, November 11, 2015

Viability of using 'loose' groundnut shells as biomass feed stock for gasification


This post will be looking at an experiment published in the "International Journal of Renewable Energy Development", volume 4 issue 2.  The article is titled "Gasification of 'Loose' Groundnut Shells in a Throatless Downdraft Gasifier."  In the study, edible nut shells were used in a gasification system to determine the viability of using such feedstock in its natural 'loose' form to create a syngas for heating and for use in internal combustion engines.  The citation and link to the article are at the bottom of the post.

Background:
As mentioned in the previous posts, the ongoing effort towards sustainability will take a multi-faceted approach.  Not any one technology will "save" us from the crisis we face.  It is vital that all our resources are utilized fully, and managed in a way that has a minimal environmental impact.  Biomass is a sustainable, cost effective way to use resources, especially agricultural waste.

Biomass in general is one of the most utilized energy resources in the world (third, after coal and oil).  About 14% of the world's energy usage is derived from biomass, and in developing countries it is an even more significant proportion (35%).  It is a cost effective way to produce renewable energy while reducing net greenhouse emissions (as a substitute for fossil fuels).  Unlike other fuels, biomass that goes unutilized releases greenhouse gases and contaminates groundwater aquifers as it naturally decays.

Groundnut shells (such as peanut shells) are a woody, fibrous material which is generally discarded after the nuts are separated.  They are commonly dumped on the sides of roads, or burned to reduce waste and make management easier.  Nigeria (where this study was carried out) is the fourth largest worldwide producer of groundnuts, and given the rural and economic nature of the country, consolidating agricultural waste for use in large industrial systems is not feasible.

Gasification is a thermo-chemical process by which a carbon-based fuel is processed into a gaseous form.  Using high temperatures, and limiting oxidization of the reaction, the material is mostly unburned, and it is transformed into a combustible gas, known as a "syngas" or "producer gas".  There are multiple steps by which the process occurs; however, the specifics are worthy of their own post and not vital to understanding the study.  By using gasification instead of direct combustion, higher thermal efficiency can be achieved, smoke can be attenuated, and the gas can be used in internal combustion engines.  The resulting gases can be much more easily "scrubbed" of contaminants pre-combustion, making the process much cleaner than direct combustion.


There are many types of gasifiers (updraft, downdraft, fluidized bed, etc.), with the main variations being how the gasifier is heated, where the fuel and oxidizers are fed, and how the fuel interacts with the flow of oxidizer.  The study we will look into used a "throatless downdraft gasifier" for a few reasons.  Although throated gasifiers generate syngas more efficiently, using loose biomass (such as groundnut shells) introduces a large concentration of undesirable volatiles (tar) into the final product.  These gases precipitate after combustion and form a "gunk" that makes the fuel unsuitable for internal combustion applications.  A downdraft gasifier was used because it is known to minimize these contaminants.

Why is this important?
This study was carried out in Nigeria which is the fourth largest groundnut producer in the world.  Utilizing this resource effectively will help reduce needs for fossil fuels, increase agricultural value, as well as reducing pollution by smoke and greenhouse gas emissions.

Throated downdraft gasifiers (with which most of the research has been done) have flow problems with loose material such as groundnut shells.  These issues affect the quality of the syngas negatively, so loose material is generally pelletized to maintain uniformity within the gasification chamber.  This study utilized a throatless downdraft gasifier to determine the quality and quantity of the syngas produced from such a system, giving insight into larger scale usage.

The Experiment

What they did
A small 5 kW laboratory-sized throatless downdraft gasifier was used.  A gas cleanup unit (comprised of a series of filters) was placed downstream of the gasifier and upstream of a sample port.  The producer gas was flared (burned) when it was not being sampled.  Air was used as the oxidizer, supplied by an air compressor.  Flow was controlled using a valve and measured using a rotameter (flowmeter).  Thermocouples were used to measure temperatures in the pyrolysis zone as well as the oxidation zone.

The gasifier was started using charcoal, oil, and paper to initiate combustion in the oxidation zone.  Once the zone was at sufficient temperature, and the emitted gases were opaque, shells were loaded into the gasifier.  Unlike commercial gasifiers, which use continuous loading of fuel, this setup was run in batches.

What they measured
Using a gas chromatograph, the N2, CO, CO2, H2 and CH4 composition of the syngas was measured.  The performance of the gasifier was measured using the independent variables of gas flow rate, air to fuel ratio, equivalence ratio, and groundnut shell consumption rate.
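One of those independent variables, the equivalence ratio, is the actual air-to-fuel ratio divided by the stoichiometric one; gasification runs sub-stoichiometric, so it sits well below 1. A minimal sketch, with an assumed stoichiometric ratio for groundnut shells:

```python
# Equivalence ratio: ER = (actual air/fuel) / (stoichiometric air/fuel).
# The stoichiometric value below is an assumed placeholder, not the
# paper's measured figure for groundnut shells.

STOICH_AF = 6.0   # kg air per kg fuel, illustrative assumption

def equivalence_ratio(air_flow_kg_h, fuel_rate_kg_h):
    return (air_flow_kg_h / fuel_rate_kg_h) / STOICH_AF

print(round(equivalence_ratio(18.0, 10.0), 2))  # 0.3, i.e. sub-stoichiometric
```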

What they found

  • Content of gas was very similar to gas produced from wood.
  • Ash content was low, making the gasifier well suited for continuous use with minimal cleaning.
  • Gas produced had low enough volatile content to be used directly in an internal combustion engine.
  • The density of the shells, though low, was not too low for them to be used well in the gasifier setup.
  • Producer gas was on the high end of documented values for the downdraft gasifier systems.
  • The ideal air flow in this experiment was 0.0071 m^3/s through the reactor.  Above and below that flow rate, two things occurred.  First, the ratio of combustible gas (CO, H2, CH4) to non-combustible gas (CO2, N2) was lower.  Second, the heating value of the gas produced decreased.  So not only was less combustible gas produced, the gas produced was also less effective.
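The heating value mentioned in the last point can be estimated from composition by weighting each combustible species' volumetric lower heating value by its volume fraction. The LHV constants below are standard textbook values; the sample composition is hypothetical, not the paper's measurement.

```python
# Estimating producer-gas lower heating value from composition: each
# combustible species contributes its volumetric LHV times its volume
# fraction; inerts (N2, CO2) contribute nothing. Sample gas is invented.

LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}   # MJ per normal cubic metre

def gas_lhv(fractions):
    """fractions: volume fraction per species; unknown species count as inert."""
    return sum(LHV.get(sp, 0.0) * x for sp, x in fractions.items())

sample = {"CO": 0.20, "H2": 0.15, "CO2": 0.12, "CH4": 0.02, "N2": 0.51}
print(round(gas_lhv(sample), 2), "MJ/Nm^3")
```

Values in the 4-6 MJ/Nm^3 range are typical of air-blown producer gas, which is why nitrogen dilution (over half the sample above) matters so much to gas quality.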

Thoughts


There were a few things that struck me from this article, besides the grammatical errors, ambiguous references to gas flow rate (whether produced gas or oxidizer), and references to incorrect figure numbers.

The gasifier's ability to handle a low-density material could have helped optimize conditions so that a very large portion of the shells was transformed into combustible gas.  This could be part of the reason the ash content was so low and the heating values were on the high end of the range seen for this type of system.  It seems plausible that this type of gasification would work well for similar loose, low-density biomass (which is not ideal in other styles of gasifier) and would not be limited to groundnut shells.

Environmental concerns aside, this type of waste-biomass utilization would seem particularly useful for less developed oil-importing countries.  Using waste products effectively would cut import spending while putting more money into the local economy, which seems like a win-win.  Since these shells would otherwise be burned without producing useful fuel, discarded on the side of a road, or buried, using them is effectively carbon neutral.  Gasifying the shells avoids the greenhouse gases that would otherwise be released by natural decay, mitigates excess smoke, and produces energy while increasing the agricultural value of the crop.

It makes a lot of sense that these professors carried out this study, given the production rate of groundnuts in Kenya and the rural nature of its agriculture.  The authors noted that the shells are commonly discarded on the sides of roads, so passersby would naturally see them as waste.  Since agricultural "waste" is commonly viewed that way, publicizing this work locally would be a great way to bring the idea of biomass conversion to a place where it is far from mainstream.  Both the environment and the economy would see positive effects.

What do you think?

Please mention any thoughts you have relating to this article or biomass gasification in general!


Citation
Kuhe, A. and Aliyu, S.J. (2015). Gasification of 'Loose' Groundnut Shells in a Throatless Downdraft Gasifier. Journal of Renewable Energy Development, 4(2), 125-130.
http://dx.doi.org/10.14710/ijred.4.2.125-130

Wednesday, November 4, 2015

Optimal conditions for algae growth


In this post we will look at an article in the "Biofuel Research Journal" where researchers explore various factors that affect the growth of a specific strain of algae as well as the lipid content.  The article citation and a link to the article are at the bottom of this post.

A little background
Microalgae have great potential as a biofuel feedstock.  There are many reasons why this is the case, but four stand out among the rest:
  • Microalgae can grow in places that are unsuitable for traditional agriculture, whether due to climate or soil conditions.
  • Growing microalgae requires a much smaller area than any other biomass.
  • The small footprint, along with other factors, makes microalgae highly suited for sequestering carbon dioxide.
  • Microalgae produce lipids instead of sugars, which makes the process of transforming them into a usable fuel much simpler. Instead of creating ethanol from sugars, which must be mixed with petroleum products for use in traditional vehicles, the lipids can be easily and inexpensively processed into biodiesel.

Why this study is important
If algae are to be successfully grown commercially for use as a biofuel feedstock, optimal growth conditions must be identified.  Compared with crude oil, producing green crude (as it is known) from algae is currently more expensive.  Minimizing expenses by increasing production efficiency will be key in making this a viable fuel source.

The Experiment

What they did
Based on previous research, the researchers determined that Chlorella sp. was the most suitable candidate for cultivation for biofuel conversion.  This was based on its growth rate in atmospheric conditions and its ability to be cultivated in open ponds.

Four factors known to alter growth rate and lipid content were identified and used as independent variables: light intensity, CO2 concentration, nitrogen content (with NaNO3 as the nitrogen source), and aeration rate.  Two values of each variable were used, for eight total experimental variations.

Using the Taguchi method, the experimenters compared these independent variables without having to run a separate experiment for every combination of values.  This allowed the researchers to use eight experimental setups instead of 48, saving time and money.  A loss function, which calculates a signal-to-noise ratio, was used to assess the validity of the results.
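The signal-to-noise ratio at the heart of the Taguchi loss function can be sketched briefly.  For a response we want to maximize (like biomass productivity), the standard "larger-is-better" form is S/N = -10·log10(mean(1/y²)) over the replicate measurements of a run.  The replicate values below are invented for illustration, not the study's data:

```python
import math

def sn_larger_is_better(replicates):
    """Taguchi 'larger-is-better' signal-to-noise ratio in dB.
    Higher S/N means a larger, more consistent response."""
    n = len(replicates)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in replicates) / n)

# Hypothetical replicate measurements of biomass productivity
# for one experimental run (illustrative numbers only)
run = [205.0, 210.9, 216.3]
print(f"S/N = {sn_larger_is_better(run):.2f} dB")
```

Each of the eight Taguchi runs gets one S/N value, and averaging those values by factor level is what ranks the factors (nitrogen content first, aeration last, in this study's biomass results).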

An enclosed photobioreactor was used for each apparatus, with halogen bulbs as the light source.  A schematic can be viewed in the study (link at the bottom).


What they found

Effect on biomass productivity
Biomass productivity in the most productive system was almost four times that of the least productive.  Results showed that the most important condition (within the range of experimental values) was nitrogen content, followed by CO2 concentration, with light intensity being less significant.  Aeration was found to have no statistically significant effect on biomass production.



Effects on lipid production
The conditions used varied the lipid content of the algae from 8.6% to 19.7%.  The most important factor influencing lipid concentration was nitrogen content, with aeration playing a smaller but significant role; light intensity and CO2 concentration were not found to be statistically significant.  Within the bounds of the experiment, lipid content dropped appreciably as nitrogen increased.



My Thoughts


One part of the analysis that struck me was the complete separation of optimizing biomass and lipid content; the two were analyzed independently of each other.  Given that the end goal is to process the algae using a transesterification reaction, the total quantity of lipids produced is the important metric.  Below this paragraph I have included a table (Table I) which evaluates it: I multiplied the average biomass production by the average percentage of lipids for each condition.  Although experiments 2 and 3 account for the highest values for biomass production and lipid concentration respectively, this shows that experiment 6 would be the most productive for the end goal of creating the most biofuel.
Table I: Average lipid productivity by experiment

Experiment   Average biomass [mg/(L·day)]   Average lipids %   Average lipids [mg/(L·day)]
    1                  61.0                      13.7                  8.3570
    2                 210.9                       8.2                 17.2938
    3                  85.9                      19.7                 16.9223
    4                 105.4                       8.6                  9.0644
    5                  54.3                      16.2                  8.7966
    6                 196.9                      11.2                 22.0528
    7                  67.2                      12.3                  8.2656
    8                  72.9                       9.1                  6.6339
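The lipid productivity figures above are simple to reproduce: multiply each experiment's average biomass production by its average lipid fraction.  A short script makes the comparison explicit:

```python
# Reproducing the lipid-productivity calculation: average biomass
# production times average lipid fraction, for each experiment.
biomass = {1: 61.0, 2: 210.9, 3: 85.9, 4: 105.4,
           5: 54.3, 6: 196.9, 7: 67.2, 8: 72.9}   # mg/(L*day)
lipid_pct = {1: 13.7, 2: 8.2, 3: 19.7, 4: 8.6,
             5: 16.2, 6: 11.2, 7: 12.3, 8: 9.1}   # percent

lipids = {e: biomass[e] * lipid_pct[e] / 100.0 for e in biomass}
best = max(lipids, key=lipids.get)
print(f"Most productive: experiment {best} "
      f"at {lipids[best]:.4f} mg lipids/(L*day)")
# -> Most productive: experiment 6 at 22.0528 mg lipids/(L*day)
```

Experiment 6 wins on the combined metric even though experiments 2 and 3 individually top the biomass and lipid-percentage columns.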
While CO2 concentration and light intensity alter the rate of biomass production, they are statistically insignificant for lipid concentration.  Conversely, while aeration affects lipid content, it is statistically insignificant for biomass production.  It follows that three of the four variables are significant for either biomass production or lipid concentration, but not both; for those three, the ideal values for overall production are clear.  NaNO3 concentration, however, had competing effects: increasing it generated more biomass but reduced lipid content, which makes looking at a single metric (total lipid mass) even more important.  That way the end goal can be compared directly to the inputs.

Experiment 6 had the most productive levels of all the variables except aeration.  It seems logical that experiment 6 would have been even more productive with an aeration rate of 3.33 vvm and all other variables kept the same.

The values used for light intensity were also interesting to me.  The highest intensity used was 14.5 klux, which corresponds closely to a shaded area on a bright sunny day.  It would seem more realistic to use a value corresponding to direct radiation on a sunny day.  My guess is that these values are based on previously available data showing that the intensity of direct afternoon light stifles production.  This is backed up by the fact that the lower-intensity runs actually corresponded to about a 17% increase in production.  Given the small number of data points, more investigation is needed to know whether there is a production maximum between the two values.

The fact that the microalgae thrive in lower-intensity light seems beneficial, at least for closed systems.  Instead of building flat ponds, it seems plausible that an array of angled photobioreactors could be used, maximizing surface area while keeping light intensity at a reasonable level.
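A back-of-envelope check suggests how steep such an angle would need to be.  Assuming direct midday sun of roughly 100 klux (an assumed round number; real values vary) and a simple cosine law for intensity on a tilted surface, while ignoring diffuse sky light:

```python
import math

FULL_SUN_KLUX = 100.0   # assumed round figure for direct midday sun
TARGET_KLUX = 14.5      # highest intensity used in the study

# Cosine law: intensity on a tilted surface ~ I0 * cos(theta), where
# theta is the angle between the surface normal and the sun direction.
# Diffuse sky light is ignored, so the real angle could be shallower.
theta = math.degrees(math.acos(TARGET_KLUX / FULL_SUN_KLUX))
print(f"Tilt from sun-facing: about {theta:.0f} degrees")
```

Under these assumptions the reactors would need to be tilted nearly edge-on to the sun, which hints that diffusing or partially shading panels might be more practical than tilting alone.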

It would also be interesting to see how the wavelength of the light used affects production.  This experiment was done with a halogen bulb, which is reasonably close to the spectrum of the sun but nonetheless different.  It is possible that the spectral distribution of sunlight would shift the optimal intensity.

Another area that may warrant additional investigation is applying these variables in a time-varying approach.  The authors cited a couple of studies concluding that certain conditions, such as low medium homogenization and nutrient deprivation, can cause the algae to produce higher lipid content as a stress response, at the cost of biomass production.  It seems plausible that conditions favorable to biomass productivity could be used early in the algal lifecycle, growing the culture to size, with the later part of the cycle focused on maximizing lipid content.  The efficacy of such an approach would depend heavily on how quickly nutrient deprivation affects lipid concentration.



Let's discuss this!

Please comment with any thoughts you have; whether about the study, its implications, its methodology, or my comments, all comments are welcome!






Article citation: Aramal M.S., Loures C.C., Da Ros P.C.M., Machado S.A., Reis C.E.R., de Castro H.F., Silva M.V. Evaluation of the cultivation conditions of marine microalgae Chlorella sp. to be used as feedstock in ultrasound-assisted ethanolysis. Biofuel Research Journal 7 (2015) 288-294. DOI: 10.18331/BRJ2015.2.3.7