
This Climate Solution is a Sleeping Giant

A breakthrough technology evolution that can have an enormous, immediate impact

By Nick Mandala, for Positive Energy Action, republished by permission

Sometimes, the most effective and powerful solutions are right in front of us, yet somehow the potential is not immediately recognized.

This is a story about using available knowledge and technology to reduce climate warming GHG emissions to zero, while at the same time creating a new economic model for housing, transportation and, well, life on earth.

Two of the greatest challenges of our time (and one could argue, of all time) are climate change and the affordability crisis in housing worldwide.

Some data on housing, published by WeForum:

  • The housing crisis could impact 1.6 billion people by 2025, the World Bank says.
  • The world needs to build 96,000 new affordable homes every day to house the estimated 3 billion people who will need access to adequate housing by 2030, UN-Habitat says.

Superficially it would seem that these two challenges are in conflict; doesn’t it cost more to build zero-carbon or even carbon-negative homes? (carbon negative = produces more energy than it consumes)

What if a combination of existing methods, materials and technology could help solve both problems at once?

R. Buckminster Fuller (American architect, designer, inventor, and writer, best known for his geodesic domes) believed in the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing,” that is, an accelerating increase in the efficiency of achieving the same or more output (products, services, information, etc.).

“An accelerating increase in the efficiency of achieving the same or more output”

Before tackling the recipe for creating significantly more affordable housing, while at the same time battling climate change in a big way, it’s helpful to begin with an analogy from sustainable transport design.

Electric vehicles have been around, in primitive form, since the 1830s, nearly two decades before the oil industry officially began in the US.

But, in essence, it took 163 years before efficiency and battery technology were sufficiently developed to make transportation as cheap in an EV as in an ICE car.

( It can be argued that this accomplishment could have happened nearly a century sooner, if not for the threat it posed to the fossil fuel industry.)

The history of the ICE automobile is often one of ignoring efficiency until simply burning more fuel without limits became an issue. R. Buckminster Fuller (see above) designed a “Dymaxion” car in the early 1930s that could transport up to 11 passengers, reach speeds of up to 90 miles per hour, and achieve 30 miles per gallon. The combined average mpg for cars and consumer trucks was still less than 30 in 2011, nearly 80 years later.

The dawn of the EV era, finally

Tesla takes the efficiency of its vehicles very seriously and has made great strides in achieving long battery range and, with the model 3, increased affordability. The three main areas where EV efficiency can be increased are the materials (weight), the highly aerodynamic design (drag coefficient) and, of course, the battery design.

Aptera, a startup company that is targeting 2023 for initial mass production of its radically designed EV, is taking this focus on efficiency a step further in creating a solar powered car.

A great challenge to using solar panels on a passenger vehicle is the small surface area available to mount the panels. For this reason, every aspect of the design must be hyper-optimized.

Astoundingly, the Aptera is slated to release a model that can travel 1000 miles on a charge and, under ideal conditions, never need to be charged at all (100% self-charging via integrated solar panels).

The main constraint is that the solar panels can add only about 40 miles of range per day; as long as you drive less than 40 miles per day on average, you would never need to spend a cent plugging into grid power.
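The arithmetic behind that claim is simple enough to sketch; a minimal illustration using the ~40-mile figure above (the daily driving distances are assumptions for illustration, not Aptera data):

```python
# Rough arithmetic behind the "never plug in" claim, using the article's
# figure of ~40 solar miles gained per day. The daily driving distances
# below are illustrative assumptions, not Aptera specifications.

SOLAR_MILES_PER_DAY = 40  # range added by the integrated panels (ideal sun)

def grid_miles_needed(daily_miles: float) -> float:
    """Miles per day that must come from grid charging."""
    return max(0.0, daily_miles - SOLAR_MILES_PER_DAY)

for miles in (25, 40, 60):
    print(f"{miles} mi/day -> {grid_miles_needed(miles)} mi from the grid")
```

Anything at or below the 40-mile line rounds to zero grid charging; above it, only the excess miles draw from the grid.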

How do they do it? Special hyper-efficient PV panels, a drag coefficient nearly half that of a Tesla Model 3 (0.13 vs. 0.23) and an aerodynamic design that makes it look as crazy as you can imagine. (Oh yeah, and only 3 wheels.)

If this story continues, and companies like Aptera are able to achieve additional incremental gains in efficiency to produce even better solar powered cars, transportation itself could become affordable at a level inconceivable in the current economic system.

Imagine buying a modestly priced vehicle (Aptera’s base model is currently priced at $25k) and never paying to charge it for the life of the car.

This is approaching an example of the “until eventually you can do everything with nothing” part of the quote above. Further gains are possible with continued design evolution.

What if a home, or housing community, could have “Aptera-like” performance?

Aptera formula:

  1. Solar powered
  2. Battery back up
  3. Hyper-efficient design to optimize 1+2

AM51 concept:

  1. Solar powered
  2. Battery back up (or geothermal, pumped hydro, etc + hyper-efficient heat pumps and other future tech appliances)
  3. Hyper-efficient design to optimize 1+2

At AM51 we are working to take decades of accumulated knowledge and use similar design principles, first pioneered by “Bucky” Fuller, in creating a complete “living system” for homes and communities.

The communication challenge lies in the preconception that aerodynamic, precision design for hyper-efficiency is fine for cars, boats, aircraft, etc., but of little use in buildings and homes.

We use the term living system because, like an EV, all the elements must be designed to work together at optimum performance in order to reach the twin goals of below-zero carbon emissions and a price below that of current, traditionally built homes and communities.

In addition, the “core and shell” (essentially the equivalent of a car’s body) and the power source (rooftop solar) each have to be hyper-efficient and work together at maximum performance.

Add to this eco-friendly insulation and HVAC systems, and something magical happens.

The EV design analogy is apt, also, because we incorporate batteries for backup and load management.

Where the analogy diverges is in the design of the building itself. Drag coefficient is less relevant (unless we create a flying house); instead, the thermal profile and material choices have a huge impact.

The thermal profile is the area where the greatest gains are possible. Traditional homes (and buildings generally) were never designed to take efficient energy use for climate control into account. (This would be the equivalent of driving a rectangular “block-car” EV, a Hummer perhaps, and watching your battery reserve disappear in minutes.)

Getting into the details of how exactly the thermal profile is achieved is beyond the scope of this article. What we can say is that, compared to a home built with traditional methods, an efficiency increase of between 80% and 94% is achievable.

In plain English, this is a measurement of how much less energy is needed to heat and cool the home, along with the standard average usage for typical residents (cooking, TVs, computers, etc).
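To put that percentage range in concrete terms, a minimal sketch; the baseline annual consumption is an assumed illustrative figure, not an AM51 measurement:

```python
# What an "80-94% more efficient" thermal profile means in energy terms.
# The baseline annual figure is an assumed illustrative value for a
# traditionally built all-electric home, not a measured AM51 number.

BASELINE_KWH_PER_YEAR = 12_000  # assumed traditional-build annual demand

def efficient_demand(baseline_kwh: float, reduction: float) -> float:
    """Annual demand after applying a fractional efficiency reduction."""
    return baseline_kwh * (1.0 - reduction)

for r in (0.80, 0.94):
    print(f"{r:.0%} reduction -> {efficient_demand(BASELINE_KWH_PER_YEAR, r):,.0f} kWh/yr")
```

At the high end of the range, the same home would need less than a tenth of the energy to run.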

First developed in the 70s and refined in the 90s, passive house standards are the underlying scientific foundation of our work in designing the ultimate thermal profile for homes.

This standard has been underappreciated and is often considered “expensive” which is only true if you look at only one aspect of the design in isolation (like triple pane windows, for example).

As part of a complete system, the real cost, not just the climate cost, is comparable and, as discussed below, can be significantly lower when every element is properly measured. Vastly less expensive and more efficient heat pumps, and other innovative HVAC systems, already offset much of the added construction cost of superior materials and quantities.

Every home a power plant and a grid interactive citizen

Unlike an EV such as the Aptera, the roof of an average-sized home has space for a much larger number of panels. Therefore, using standard current PV systems, an AM51 home, with an energy demand profile over 85% more efficient, can power itself using only a portion of the space available.

With a system that uses the entire available area, a significant amount of excess power is available to share with the public grid, in exchange for compensation.
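A rough sizing sketch illustrates the point; the panel wattage, annual yield, and roof capacity are generic assumptions, not AM51 or manufacturer specifications:

```python
# Sketch of why a reduced demand profile needs only part of the roof.
# Panel wattage, annual yield, and roof capacity are generic assumptions,
# not AM51 or manufacturer specifications.
from math import ceil

PANEL_KW = 0.4           # ~400 W residential panel
KWH_PER_KW_YEAR = 1_300  # assumed annual yield per kW of PV (moderate climate)
ROOF_PANELS = 30         # assumed usable roof capacity, in panels

def panels_needed(annual_demand_kwh: float) -> int:
    """Panels required to cover a given annual demand."""
    return ceil(annual_demand_kwh / (PANEL_KW * KWH_PER_KW_YEAR))

demand = 12_000 * 0.15   # assumed baseline demand after an ~85% reduction
needed = panels_needed(demand)
surplus_kwh = (ROOF_PANELS - needed) * PANEL_KW * KWH_PER_KW_YEAR
print(needed, surplus_kwh)  # a handful of panels cover the home; the rest is exportable
```

Under these assumptions, only a small fraction of the roof covers the home’s own demand; filling the remaining area produces the exportable surplus described above.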

All of this can be magnified, particularly in a community setting, once grid-interactive systems and net metering become standard, and laws adapt to maximize this potential.

In a nutshell, our goal is to create a system where a community functions as individual hyper-efficient homes, combined with shared solar power and backup.

The calculated benefits to this total system design are “beyond Aptera” in their potential impact at scale.

The real cost difference between a fully electric home built using traditional methods and an AM51 hyper-efficient home also reflects the higher energy costs of all-electric homes vs. cheap gas and oil. Many states are planning to require all-electric single-family home construction by 2023-2025.

Imagine a home that, once paid for via mortgage at a price at or below a traditional home, does not generate a cent in energy bills for up to 25 years…

…and, additionally, will generate monthly income, thus reducing the monthly payments, in some cases significantly.

All of this, while having a negative carbon footprint (more energy produced than consumed), and causing enormous reductions in GHG emissions at scale…
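As a back-of-envelope illustration of that monthly cash flow (all dollar figures are assumptions for illustration, not AM51 pricing):

```python
# Illustrative monthly cash flow: mortgage plus energy bills, minus any
# grid-export income. All dollar figures are assumptions for
# illustration, not AM51 pricing.

def effective_monthly_cost(mortgage: float, utility_bill: float, export_income: float) -> float:
    """Total monthly housing-plus-energy cost."""
    return mortgage + utility_bill - export_income

traditional = effective_monthly_cost(2_000, 180, 0)      # assumed $180/mo energy bill
hyper_efficient = effective_monthly_cost(2_000, 0, 60)   # no bill, assumed $60/mo export credit
print(traditional, hyper_efficient)  # 2180 vs 1940
```

Even with modest assumed numbers, eliminating the bill and adding a small export credit noticeably reduces the effective monthly payment.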

For many, utility bills are not the greatest concern when imagining the cost of home ownership. But the freedom of a “grid-optional” lifestyle, and the comfort, health and well-being that come with a perfectly climate-controlled indoor environment, are benefits that, once experienced, we believe will eventually make traditional home environments obsolete.

“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”

-R. Buckminster Fuller

Adaptation to hotter heat waves, “polar vortexes” and other extreme weather events that are now increasingly likely is an important topic.

Having a living system that can be counted on to keep you warm in winter, cool in the raging summer heat, and all for zero dollars beyond the basic initial costs, must become a minimum standard as we go forward.

Fractalize™, the coup de grâce of affordability for the grid-interactive, hyper-efficient home

So much for battling climate change through efficient design and synergistic systems.

To reach even greater affordability, which matters most for getting homes to those in need, AM51 homes and communities will need a construction method that reduces actual costs even further.

Labor shortages in construction and supply-chain issues for materials are two major factors driving costs up.

Our completely unique pre-manufactured building system, Fractalize™, takes on both issues and more.

With modern, yet simple, computer- and robot-assisted manufacturing of building blocks, optimized specifically for home construction and made exclusively from plant-based materials (wood and others), far less labor is required.

Building times are up to 10X faster and minimal assembly crews, with no heavy machinery, are all that’s needed.

Again, the specific details of the hybrid-deep-tech-low-tech system are too complex for this article, but the end result of the added layer of efficiency (in this case efficient execution of construction) can result, by our calculations, in up to 15% lower construction costs overall, with additional cost-benefits from the speed to market.

The automated Fractalize™ manufacturing system is planned for mini factories near each region where homes and communities are needed.

It can also be adapted to non-OECD developing economies, where local supply-chain logistics and available labor can lower prices even further under those unique circumstances.

As for North America, imagine owning a home and having your home pay you, provide free energy for a quarter century, yet cost up to 20% less than a comparable home, built old-style!

This, combined with unprecedented healthy, comfortable living, convenience, and elegance will proclaim a new architectural century. And with an Aptera in the driveway you’ll never pay a cent for transportation or utilities for the life of your home and car. Bucky would be winking at the thought…

Microgrids and Distributed Solar Energy can Change our Future

Forces are aligning to accelerate the inevitable: a decentralized solution for power production and distribution

Today it is not easy to imagine a world where the centralized electrical grid has become unnecessary and its use is discontinued. In such a world, distributed energy systems, such as localized microgrid power plants, will have become the primary means of generating and distributing electricity.

In this world, communities and businesses would generate their own electricity using a combination of renewable energy sources, such as solar and wind power, and local energy storage systems. These microgrids would be connected to one another, forming a decentralized network of energy production and distribution.

As a result of this shift, the need for large centralized power plants and transmission lines would eventually be eliminated. Instead, energy would be generated closer to the point of consumption, reducing the enormous transmission losses and costs inherent in the current centralized systems. Additionally, because microgrids can operate independently, they would be less vulnerable to disruptions caused by natural disasters or cyber attacks, leading to increased energy security and reliability.
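The transmission-loss point can be sketched with a one-line formula; the loss fractions are assumed ballparks (US transmission and distribution losses are commonly cited at around 5%):

```python
# How much must be generated to deliver 1 kWh to the load?
# Loss fractions are assumed ballparks, not measurements: US transmission
# and distribution losses are commonly cited at around 5%; a microgrid
# generating near the point of use loses far less.

def generation_required(delivered_kwh: float, loss_fraction: float) -> float:
    """kWh of generation needed so that delivered_kwh survives the losses."""
    return delivered_kwh / (1.0 - loss_fraction)

central = generation_required(1.0, 0.05)  # centralized grid, ~5% losses
local = generation_required(1.0, 0.01)    # assumed ~1% local losses
print(round(central, 4), round(local, 4))
```

Per kilowatt-hour the difference looks small, but summed over a national grid it represents an enormous amount of wasted generation.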

Furthermore, in this future scenario, individuals, communities and businesses would have greater control over their energy production and consumption. For example, excess energy generated by a community’s microgrid could be shared with neighboring communities or sold back to the grid or grids. Consumers would be able to choose from a variety of energy service providers, leading to more competition in the market and lower costs for consumers.

The use of renewable energy sources would also be the default in this world, as microgrids would allow for more efficient use of resources and can help to improve the overall reliability of the system. The integration of renewable energy sources as standard would also lead to an overall more rapid reduction in greenhouse gas emissions and other pollutants, which would have a cumulative positive impact on the environment and public health.

The concept of energy poverty would be greatly reduced as well, as microgrids can provide greater access to electricity and economic opportunities for marginalized communities. Furthermore, the shift towards microgrids would also promote local economic development and job creation, as sustainable microgrid power plants can be owned and operated by individuals, communities and small businesses.

Overall, the ultimate end scenario, when the centralized electrical grid has become unnecessary and its use is discontinued, would be one where communities and businesses have greater control over their energy production and consumption, leading to increased energy security and reliability, lower costs for consumers, and a reduction in income inequality and environmental impact.

There are massive forces already marshaling to protect the legacy systems

The potential for distributed power plants, virtual power storage systems, and sustainable production and consumption in close proximity to one another is unlimited.

However, the existing systems, still based primarily on fossil-fuel-powered centralized power plants and huge grid networks (3 in the US, to be precise) for distribution, with all the attendant problems, are already being targeted for massive expansion and renovation.

The US centralized power grid system, as we know it, is facing several major challenges as the demand for electricity continues to rise. One of the most significant problems is the aging infrastructure. Many of the transmission lines, substations and power plants that make up the grid were built decades ago and are in need of upgrading or replacement.

The likely time and cost to upgrade and repair these systems is extreme: measured in years, even decades, and billions if not trillions of dollars.

This not only poses a risk to the reliability of the system but also leads to increased maintenance costs. Furthermore, as the population continues to grow and urbanize, the demand for electricity is increasing and the centralized grid is struggling to keep up with the rising demand.

“Because DERs are sited and in many cases controlled by non-utility actors, grid operators may not have as much insight into their performance as they would into a conventional power plant, requiring changes to operational and planning frameworks. In addition, many utilities’ business models rely on expanding sales from a more-centralized grid system”

ACEEE.org

    Another major problem is the increasing amount of intermittent renewable energy sources, such as solar and wind power, that are being integrated into the grid. These sources of energy are often located in remote areas, far from population centers, and the cost of transmitting this energy over long distances can be prohibitively expensive. Additionally, traditional centralized power plants are not well-suited to handle the fluctuations in power generation that can occur with renewable energy sources.

    Many solutions are ready to implement, once the will and message are aligned

    The case for rapid deployment and a shift to distributed power systems is further strengthened by the potential of incorporating architectural designs that are 90% more energy efficient. One of the most effective methods of achieving this level of energy efficiency is the use of ultra-high performance building design methods.

    A 90% gain in energy efficiency can be established as an international benchmark for energy-efficient building design, and is already well established under the passive house standard. That standard is based on the principle of designing buildings to be highly insulated and airtight, with minimal thermal bridging, in order to reduce heat loss in the winter and heat gain in the summer.

    This leads to significant energy savings, as buildings require 90% less heating and cooling, and therefore far less energy is consumed. The method also includes high-efficiency windows and doors, which reduce heat loss and gain and are designed to take advantage of natural light and heat from the sun.
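    The heat-loss argument behind high-efficiency glazing and heavy insulation can be sketched with the standard conduction formula Q = U × A × ΔT; the U-values below are typical published ballparks, not figures from this article:

```python
# Back-of-envelope conduction loss through a building element:
# Q = U * A * dT. The U-values are typical published ballparks
# (W/m^2.K), not figures from this article.

def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    """Steady-state conductive heat loss through an element, in watts."""
    return u_value * area_m2 * delta_t

AREA_M2 = 20.0   # assumed total glazing area of a home
DELTA_T = 25.0   # indoor-outdoor temperature difference in kelvin

double_pane = heat_loss_watts(2.8, AREA_M2, DELTA_T)  # typical double glazing
triple_pane = heat_loss_watts(0.8, AREA_M2, DELTA_T)  # passive-house-grade glazing
print(double_pane, triple_pane)
```

The lower U-value cuts the continuous heating load through the glazing by roughly two thirds; applied to every wall, roof, and window element, these reductions compound into the whole-building savings described above.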

    Ultra-high performance buildings, also known as net-zero energy buildings, take hyper-efficient design to the next level by incorporating renewable energy technologies such as solar panels, wind turbines, and geothermal systems. They are designed to generate as much, or more, energy than they consume, and when combined with energy storage systems they can be completely self-sufficient.

    When combined, these design methods can create buildings that are not only highly energy efficient but also comfortable, healthy, and resilient. These buildings are designed to maintain a consistent indoor temperature and humidity levels, providing a high level of indoor air quality, while also being able to withstand extreme weather conditions.

    In addition, setting a 90% more energy efficient standard would also greatly reduce the carbon footprint of the building sector, which is one of the largest contributors to greenhouse gas emissions. By reducing the energy consumption of buildings, by the maximum amount that established methods are capable of, we can accelerate the reduction of our dependence on fossil fuels, and far more effectively mitigate the effects of climate change.

    Changing direction to improve the odds of success, not just survival

    As the shift begins to take hold, and a distributed, hyper-efficient system becomes the dominant direction, many benefits could arise. Gone would be the days of blackouts and brownouts caused by failures in the centralized grid. Microgrids are designed to operate independently, meaning that if there is a disruption in one area, the rest of the system can continue to function normally. This increased resilience also makes microgrids more resistant to natural disasters and cyber attacks.

    With the decentralization of energy production, the cost of energy would decrease significantly. Consumers would no longer be at the mercy of large utility companies, and competition among small, community-based energy providers would drive prices down. Additionally, the excess energy generated by microgrids, and unneeded due to 90% greater efficiency in building designs, could be sold back to the grid, providing a significant source of income for the community.

    In this world, income inequality would be reduced as access to electricity and economic opportunities improves for marginalized communities. Remote or rural areas that were previously off the grid would have access to reliable and affordable energy, improving living standards and reducing poverty.

    Furthermore, the shift towards microgrids can lead to a boost in local economic development and job creation. The ownership and operation of microgrid power plants are often in the hands of individuals, small businesses and communities, leading to a stronger local economy.

    The future world where microgrids are the norm is not just a utopia for energy production and distribution; it would also have a positive impact on the environment. With the widespread use of renewable energy sources, greenhouse gas emissions would decrease dramatically and the air would be cleaner.

    The goal is a future state in which the centralized electrical grid is no longer necessary and its use has been discontinued: a world where energy is generated and distributed by a network of small, decentralized power plants and storage systems. This world is characterized by increased resilience, reduced costs, improved access to electricity, reduced income inequality, local economic development and job creation, and a cleaner environment. It’s a future worth striving for.


    Please help keep us publishing the content you love

    Lynxotic may receive a small commission based on any purchases made by following links from this page

    Virtual Power Plants Could be the Future of Distributed Energy

    More grid power failures are likely: a distributed network is the only solution

    If you have heard about the concept of a VPP, it is most likely that you read about a Tesla Virtual Power Plant. A virtual power plant (VPP) is a system that uses a network of decentralized energy resources, such as solar panels, wind turbines, and energy storage systems, to generate electricity.

    These resources are connected and controlled through a central management system, which allows them to operate as a single, coordinated entity.

    The goal of a VPP is to provide a reliable and cost-effective source of electricity by leveraging the collective output of the connected energy resources.
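    The coordination idea can be sketched in a few lines; the device names and capacities below are invented for illustration, and real VPP dispatch is far more sophisticated:

```python
# Minimal sketch of the VPP idea: a central coordinator dispatches a
# fleet of small, distributed resources as if they were one plant.
# Device names and capacities are invented for illustration.

class Resource:
    def __init__(self, name: str, available_kw: float):
        self.name = name
        self.available_kw = available_kw

def dispatch(resources, demand_kw):
    """Greedily allocate demand across resources; returns (plan, unserved kW)."""
    plan, remaining = {}, demand_kw
    for r in resources:
        take = min(r.available_kw, remaining)
        if take > 0:
            plan[r.name] = take
            remaining -= take
    return plan, remaining

fleet = [Resource("rooftop_pv_17", 4.0),
         Resource("powerwall_03", 5.0),
         Resource("wind_micro_2", 3.0)]
plan, unserved = dispatch(fleet, 10.0)
print(plan, unserved)
```

Many small resources, none of which could serve the load alone, together behave like one coordinated plant; that is the essence of the "single, coordinated entity" described above.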

    Tesla has been working on the concept since as far back as 2015, when it first began producing battery backup systems for solar.

    Tesla’s home system, the Powerwall, and the Megapack, a massive 3 MWh energy storage product first offered in 2019, are the best-known backup systems for solar installations.

    More recently, companies such as Swell Energy have been working with utilities to operate “behind the meter” virtual power plant systems that manage residential solar installations to prevent outages and maximize the financial benefit of the power generated.

    VPPs can be used to provide electricity to a specific location, such as a neighborhood or a campus, or they can be connected to the grid and used to generate electricity for a larger area.

    They can also be used to support the integration of renewable energy sources into the grid, by providing a flexible and responsive source of electricity that can be dispatched as needed to meet changing demand.

    VPPs can be beneficial in a number of ways. They can help to reduce reliance on fossil fuels, which can help to reduce greenhouse gas emissions and mitigate the impacts of climate change.

    They can also help to lower energy costs by using locally-generated renewable energy, and they can help to improve the reliability of the electricity supply by providing a distributed source of electricity that is not reliant on a single power plant or transmission line.

    The many benefits of these systems are only now beginning to emerge; with greater cooperation between government, regulators, utilities and individual homeowners, the potential for a more resilient grid and more secure, sustainable energy for communities is virtually unlimited.


    Meet the power plant of the future: Solar + battery hybrids are poised for explosive growth

    By pairing solar power and battery storage, hybrids can keep providing electricity after dark.

    Joachim Seel, Lawrence Berkeley National Laboratory; Bentham Paulos, Lawrence Berkeley National Laboratory, and Will Gorman, Lawrence Berkeley National Laboratory

    America’s electric power system is undergoing radical change as it transitions from fossil fuels to renewable energy. While the first decade of the 2000s saw huge growth in natural gas generation, and the 2010s were the decade of wind and solar, early signs suggest the innovation of the 2020s may be a boom in “hybrid” power plants.

    A typical hybrid power plant combines electricity generation with battery storage at the same location. That often means a solar or wind farm paired with large-scale batteries. Working together, solar panels and battery storage can generate renewable power when solar energy is at its peak during the day and then release it as needed after the sun goes down.

    A look at the power and storage projects in the development pipeline offers a glimpse of hybrid power’s future.

    Our team at Lawrence Berkeley National Laboratory found that a staggering 1,400 gigawatts of proposed generation and storage projects have applied to connect to the grid – more than all existing U.S. power plants combined. The largest group is now solar projects, and over a third of those projects involve hybrid solar plus battery storage.

    While these power plants of the future offer many benefits, they also raise questions about how the electric grid should best be operated.

    Why hybrids are hot

    As wind and solar grow, they are starting to have big impacts on the grid.

    Solar power already exceeds 25% of annual power generation in California and is spreading rapidly in other states such as Texas, Florida and Georgia. The “wind belt” states, from the Dakotas to Texas, have seen massive deployment of wind turbines, with Iowa now getting a majority of its power from the wind.

    This high percentage of renewable power raises a question: How do we integrate renewable sources that produce large but varying amounts of power throughout the day?

    (Chart credit: Joshua Rhodes/University of Texas at Austin)

    That’s where storage comes in. Lithium-ion battery prices have rapidly fallen as production has scaled up for the electric vehicle market in recent years. While there are concerns about future supply chain challenges, battery design is also likely to evolve.

    The combination of solar and batteries allows hybrid plant operators to provide power through the most valuable hours when demand is strongest, such as summer afternoons and evenings when air conditioners are running on high. Batteries also help smooth out production from wind and solar power, store excess power that would otherwise be curtailed, and reduce congestion on the grid.
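    The charge-at-midday, discharge-in-the-evening pattern can be sketched as a toy hourly simulation; the generation and demand profiles below are invented for illustration:

```python
# Toy hourly simulation of the solar-plus-storage pattern described
# above: charge the battery from midday surplus, discharge it in the
# evening. The generation and demand profiles are invented for
# illustration, not data from an actual hybrid plant.

solar = [0, 0, 1, 4, 6, 6, 4, 1, 0, 0]    # MW over ten hours (morning -> night)
demand = [2, 2, 2, 3, 3, 3, 4, 5, 5, 4]   # MW load, peaking in the evening

def simulate(solar, demand, capacity_mwh=8.0):
    """Return MW served by the battery in each hour."""
    soc, served_by_battery = 0.0, []
    for gen, load in zip(solar, demand):
        surplus = gen - load
        if surplus > 0:                    # charge from excess solar
            soc = min(capacity_mwh, soc + surplus)
            served_by_battery.append(0.0)
        else:                              # discharge to cover the gap
            discharge = min(soc, -surplus)
            soc -= discharge
            served_by_battery.append(discharge)
    return served_by_battery

print(simulate(solar, demand))
```

Midday surplus that would otherwise be curtailed is banked and then covers most of the evening peak, which is exactly the value proposition of co-located storage.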

    Hybrids dominate the project pipeline

    At the end of 2020, there were 73 solar and 16 wind hybrid projects operating in the U.S., amounting to 2.5 gigawatts of generation and 0.45 gigawatts of storage.

    Today, solar and hybrids dominate the development pipeline. By the end of 2021, more than 675 gigawatts of proposed solar plants had applied for grid connection approval, with over a third of them paired with storage. Another 247 gigawatts of wind farms were in line, with 19 gigawatts, or about 8% of those, as hybrids.

    The amount of proposed solar, storage and wind power waiting to hook up to the grid has grown dramatically in recent years, while coal, gas and nuclear have faded. (Chart: Lawrence Berkeley National Laboratory)

    Of course, applying for a connection is only one step in developing a power plant. A developer also needs land and community agreements, a sales contract, financing and permits. Only about one in four new plants proposed between 2010 and 2016 made it to commercial operation. But the depth of interest in hybrid plants portends strong growth.

    In markets like California, batteries are essentially obligatory for new solar developers. Since solar often accounts for the majority of power in the daytime market, building more adds little value. Currently 95% of all proposed large-scale solar capacity in the California queue comes with batteries.

    5 lessons on hybrids and questions for the future

    The opportunity for growth in renewable hybrids is clearly large, but it raises some questions that our group at Berkeley Lab has been investigating.

    Here are some of our top findings:

    • The investment pays off in many regions. We found that while adding batteries to a solar power plant increases the price, it also increases the value of the power. Putting generation and storage in the same location can capture benefits from tax credits, construction cost savings and operational flexibility. Looking at the revenue potential over recent years, and with the help of federal tax credits, the added value appears to justify the higher price.
    • Co-location also means tradeoffs. Wind and solar perform best where the wind and solar resources are strongest, but batteries provide the most value where they can deliver the greatest grid benefits, like relieving congestion. That means there are trade-offs when determining the best location with the highest value. Federal tax credits that can be earned only when batteries are co-located with solar may be encouraging suboptimal decisions in some cases.
    • There is no one best combination. The value of a hybrid plant is determined in part by the configuration of the equipment. For example, the size of the battery relative to a solar generator can determine how late into the evening the plant can deliver power. But the value of nighttime power depends on local market conditions, which change throughout the year.
    • Power market rules need to evolve. Hybrids can participate in the power market as a single unit or as separate entities, with the solar and storage bidding independently. Hybrids can also be either sellers or buyers of power, or both. That can get complicated. Market participation rules for hybrids are still evolving, leaving plant operators to experiment with how they sell their services.
    • Small hybrids create new opportunities. Hybrid power plants can also be small, such as solar and batteries in a home or business. Such hybrids have become standard in Hawaii as solar power saturates the grid. In California, customers who are subject to power shutoffs to prevent wildfires are increasingly adding storage to their solar systems. These “behind-the-meter” hybrids raise questions about how they should be valued, and how they can contribute to grid operations.

    Hybrids are just beginning, but a lot more are on the way. More research is needed on the technologies, market designs and regulations to ensure the grid and grid pricing evolve with them.

    While questions remain, it’s clear that hybrids are redefining power plants. And they may remake the U.S. power system in the process.

    Joachim Seel, Senior Scientific Engineering Associate, Lawrence Berkeley National Laboratory; Bentham Paulos, Affiliate, Electricity Markets & Policy Group, Lawrence Berkeley National Laboratory, and Will Gorman, Graduate Student Researcher in Electricity Markets and Policy, Lawrence Berkeley National Laboratory

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Related:


    Check out Lynxotic on YouTube

    Find books on Music, Movies & Entertainment and many other topics at our sister site: Cherrybooks on Bookshop.org

    Lynxotic may receive a small commission based on any purchases made by following links from this page

    A ‘100% renewables’ target might not mean what you think it means. An energy expert explains

    In the global effort to transition from fossil fuels to clean energy, achieving a “100% renewables” electricity system is considered ideal.

    Some Australian states have committed to 100% renewable energy targets, or even 200% renewable energy targets. But this doesn’t mean their electricity is, or will be, emissions free.

    Electricity is responsible for a third of Australia’s emissions, and making it cleaner is a key way to reduce emissions in other sectors that rely on it, such as transport.

    So it’s important we have clarity about where our electricity comes from, and how emissions-intensive it is. Let’s look at what 100% renewables actually implies in detail.

    Is 100% renewables realistic?

    Achieving 100% renewables is one way of eliminating emissions from the electricity sector.

    It’s commonly interpreted to mean all electricity must be generated from renewable sources. These sources usually include solar, wind, hydro, and geothermal, and exclude nuclear energy and fossil fuels with carbon capture and storage.

    But this is a very difficult feat for individual states and territories to try to achieve.

    The term “net 100% renewables” more accurately describes what some jurisdictions — such as South Australia and the ACT — are targeting, whether or not they’ve explicitly said so.

    These targets don’t require that all electricity people use within the jurisdiction come from renewable sources. Some might come from coal or gas-fired generation, but the government offsets this amount by making or buying an equivalent amount of renewable electricity.

    A net 100% renewables target allows a state to spruik its green credentials without needing to worry about the reliability implications of being totally self-reliant on renewable power.

    So how does ‘net’ 100% renewables work?

    All east coast states are connected to the National Electricity Market (NEM) — a system that allows electricity to be generated, used and shared across borders. This means individual states can achieve “net 100% renewables” without the renewable generation needing to occur when or where the electricity is required.

    Take the ACT, for example, which has used net 100% renewable electricity since October 2019.

    The ACT government buys renewable energy from generators outside the territory, which is then mostly used in other states, such as Victoria and South Australia. Meanwhile, people living in the ACT rely on power from NSW that’s not emissions-free, because it largely comes from coal-fired power stations.

    This way, the ACT government can claim net 100% renewables because it’s offsetting the non-renewable energy its residents use with the clean energy it’s paid for elsewhere.

    SA’s target is to reach net 100% renewables by the 2030s. Unlike the ACT, it plans to generate renewable electricity locally, equal to 100% of its annual demand.

    At times, such as especially sunny days, some of that electricity will be exported to other states. At other times, such as when the wind drops off, SA may need to rely on electricity imports from other states, which probably won’t come from all-renewable sources.

    So what happens if all states commit to net 100% renewable energy targets? Then, the National Electricity Market will have a de-facto 100% renewable energy target — no “net”.

    That’s because the market is one entire system, so its only options are “100% renewables” (implying zero emissions), or “less than 100% renewables”. The “net” factor doesn’t come into it, because there’s no other part of the grid for it to buy from or sell to.
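    The “net” accounting described above amounts to a simple ratio: renewable energy a jurisdiction makes or buys anywhere on the grid, divided by its own annual consumption. A minimal sketch, using hypothetical figures rather than actual ACT data:

```python
def net_renewable_share(local_demand_gwh, renewables_made_or_bought_gwh):
    """Share of annual demand offset under 'net' accounting.

    The renewable energy counts regardless of when or where on the
    grid it is actually generated or consumed.
    """
    return renewables_made_or_bought_gwh / local_demand_gwh

# Hypothetical example: all local demand is physically served by a partly
# fossil-fuelled grid, but fully offset by contracted renewables elsewhere.
share = net_renewable_share(local_demand_gwh=3_000,
                            renewables_made_or_bought_gwh=3_000)
print(f"{share:.0%}")  # prints "100%", though local consumption isn't emissions-free
```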

    How do you get to “200% renewables”, or more?

    It’s mathematically impossible for more than 100% of the electricity used in the NEM to come from renewable sources: 100% is the limit.

    Any target of more than 100% renewables is a different calculation. The target is no longer a measure of renewable generation versus all generation, but renewable generation versus today’s demand.

    Australia could generate several times more renewable energy than there is demand today, but still use a small and declining amount of fossil fuels during rare weather events. (Image: Shutterstock)

    Tasmania, for example, has legislated a target of 200% renewable energy by 2040. This means it wants to produce twice as much renewable electricity as it consumes today.

    But this doesn’t necessarily imply all electricity consumed in Tasmania will be renewable. For example, it may continue to import some non-renewable power from Victoria at times, such as during droughts when Tasmania’s hydro dams are constrained. It may even need to burn a small amount of gas as a backup.

    This means the 200% renewable energy target is really a “net 200% renewables” target.
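    Targets above 100% change the denominator: they compare renewable generation with today’s demand rather than with total generation. A small sketch with hypothetical figures illustrates the arithmetic:

```python
def renewable_target_pct(renewable_generation_gwh, todays_demand_gwh):
    """Target percentage as used for '200%'-style goals: renewable
    generation measured against today's demand, so values above
    100% are possible."""
    return 100 * renewable_generation_gwh / todays_demand_gwh

# Hypothetical Tasmania-style case: produce twice today's consumption.
print(renewable_target_pct(renewable_generation_gwh=21_000,
                           todays_demand_gwh=10_500))  # prints 200.0
```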

    Meanwhile, the Greens are campaigning for 700% renewables. This, too, is based on today’s electricity demand.

    In the future, demand could be much higher due to electrifying our transport, switching appliances from gas to electricity, and potentially exporting energy-intensive, renewable commodities such as green hydrogen or ammonia.

    Targeting net-zero emissions

    These “more than 100% renewables” targets set by individual jurisdictions don’t necessarily imply all electricity Australians use will be emissions free.

    It’s possible — and potentially more economical — that we would meet almost all of this additional future demand with renewable energy, but keep some gas or diesel capacity as a low-cost backstop.

    This would ensure continued electricity supply during rare, sustained periods of low wind, low sun, and high demand, such as during a cloudy, windless week in winter.

    The energy transition is harder near the end — each percentage point between 90% and 100% renewables is more expensive to achieve than the previous.

    That’s why, in a recent report from the Grattan Institute, we recommended governments pursue net-zero emissions in the electricity sector first, rather than setting 100% renewables targets today.

    For example, buying carbon credits to offset the small amount of emissions produced in a 90% renewable NEM is likely to be cheaper in the medium term than building enough energy storage — such as batteries or pumped hydro dams — to backup wind and solar at all times.

    The bottom line is governments and companies must say what they mean and mean what they say when announcing targets. It’s the responsibility of media and pundits to take care when interpreting them.

    This article is by James Ha, Associate, Grattan Institute and republished from The Conversation under a Creative Commons license. Read the original article.


    Biden bets a million barrels a day will drive down soaring gas prices – what you need to know about the Strategic Petroleum Reserve

    Several sites, such as one near Freeport, Texas, store the hundreds of millions of barrels in the United States’ Strategic Petroleum Reserve. (Photo: Department of Energy via AP)

    Scott L. Montgomery, University of Washington

    The Biden administration on March 31, 2022, said it plans to release an unprecedented 180 million barrels of oil from the U.S. Strategic Petroleum Reserve to combat the recent spike in gas and diesel prices. About a million barrels of oil will be released every day for up to six months.

    If all the oil is released, it would represent almost one-third of the current volume of the Strategic Petroleum Reserve. It follows a release of 30 million barrels in early March, which was the largest withdrawal until this latest one.

    But what is the Strategic Petroleum Reserve, why was it created, and when has it been used? And does it still serve a purpose, given that the U.S. exports more oil and other petroleum products than it imports?

    As an energy researcher, I believe considering the reserve’s history can help answer these questions.

    Origins of the reserve

    Congress created the Strategic Petroleum Reserve as part of the Energy Policy and Conservation Act of 1975 in response to a global oil crisis.

    Arab oil-exporting states led by Saudi Arabia had cut supply to the world market because of Western support for Israel in the 1973 Yom Kippur War. Oil prices quadrupled, resulting in major economic damage to the U.S. and other countries. This also shook the average American, who had grown used to cheap oil.

    The oil crisis caused the U.S., Japan and 15 other advanced countries to form the International Energy Agency in 1974 to recommend policies that would forestall such events in the future. One of the agency’s key ideas was to create emergency petroleum reserves that could be drawn on in case of a severe supply disruption.

    The map shows the locations of the oil held in the Strategic Petroleum Reserve. (Map: Department of Energy)

    The Energy Policy and Conservation Act originally stipulated the reserve should hold up to 1 billion barrels of crude and refined petroleum products. Though it has never reached that size, the U.S. reserve is the largest in the world, with a maximum volume of 714 million barrels. The cap was previously set at 727 million barrels.

    As of March 25, 2022, the reserve contained about 568 million barrels.

    Oil in the reserve is stored in a series of large underground salt domes at four locations along the Gulf Coast of Texas and Louisiana, and is linked to major supply pipelines in the region.

    Salt domes, formed when a mass of salt is forced upward, are a good choice for storage since salt is impermeable and has low solubility in crude oil. Most of the storage sites were acquired by the federal government in 1977 and became fully operational in the 1980s.

    History of drawdowns

    In the 1975 act, Congress specified that the reserve was intended to prevent “severe supply interruptions” – that is, actual oil shortages.

    Over time, as the oil market has changed, Congress expanded the list of reasons for which the Strategic Petroleum Reserve could be tapped, such as domestic supply interruptions due to extreme weather.

    Prior to March 2022, about 280 million barrels of crude oil had been released since the reserve’s creation, including a 50 million barrel release that began in November 2021.

    There have only been three emergency releases in the reserve’s history. The first was in 1991, after Iraq’s invasion of Kuwait the year before resulted in a sharp drop in oil supply to the world market. The U.S. released 34 million barrels.

    The second release, of 30 million barrels, came in 2005 after Hurricanes Rita and Katrina knocked out Gulf of Mexico production, which then comprised about 25% of U.S. domestic supply.

    The third was a coordinated release by the International Energy Agency in 2011 as a result of supply disruptions from several oil-producing countries, including Libya, then facing civil unrest during the Arab Spring. In all, the agency coordinated a release of 60 million barrels of crude, half of which came from the U.S.

    In addition, there have been 11 planned sales of oil from the reserve, mainly to generate federal revenue. One of these – the 1996-1997 sale to reduce the federal budget deficit – seemed to serve political ends rather than supply-related ones.

    A better way to avoid pain at the pump

    President Joe Biden’s November decision to tap the reserve was also seen as political by Republicans because there was no emergency shortage of supply at that time.

    Similarly, the latest historic release of 180 million barrels could also be seen as serving a political purpose – in an election year, no less. But I believe it also seems perfectly legitimate in terms of fulfilling the Strategic Petroleum Reserve’s original purpose: reducing the negative impacts of a major oil price shock.

    Though the U.S. is today a net petroleum exporter, it continues to import as much as 8.2 million barrels of crude oil every day.

    But in my view, the best way to avoid the pain of oil price shocks is to lower oil demand by reducing global carbon emissions – rather than mainly relying on releases from the reserve.

    This is an updated version of an article originally published on Nov. 24, 2021.

    Scott L. Montgomery, Lecturer, Jackson School of International Studies, University of Washington

    This article is republished from The Conversation under a Creative Commons license. Read the original article.



    Article from 1912 linked Coal to Climate Change

    Above: Photo Collage / Lynxotic

    We were given a warning about fossil fuels. Now we’re living it…

    An image from an old newspaper was shared on social media from the account “Historical Photos” and titled “Coal Consumption Affecting Climate”.

    The image of the clipping went viral very likely because of the amazing date it was published on, August 14th, 1912. The image has since been seen by over 17k people on Twitter and over 6k on Facebook.

    Understandably, many people questioned whether the article was actually authentic or merely fabricated. The implication is a big one: scientists have known for over a century about the negative impacts coal consumption has on the climate (and haven’t done much to change course).

    https://twitter.com/Iearnhistory/status/1425673866609762306?s=20

    As per USA Today, the article is, in fact, real and has been authenticated by Snopes. The text originated from a March 1912 report in Popular Mechanics magazine titled “Remarkable Weather of 1911: The Effect of the Combustion of Coal on the Climate – What Scientists Predict for the Future.” Similar phrasing appeared in a New Zealand newspaper published on August 14, 1912, which is the source of the viral image.


    A ‘Ring of Fire’ Solar Eclipse Starts at Dawn Thursday on the East Coast

    Above: Photo Credit / Bryan Goff / Unsplash

    Look to the sky for a solar show that will create a stunning glow…

    Stargazers and skywatchers are in for another treat, coming about two weeks after the lunar eclipse also referred to as the “Super Flower Blood Moon”. Tonight and into Thursday, June 10th, an annular solar eclipse known as the “ring of fire” will be visible. Any discussion of all things lunar, from blood moons to eclipses, also invites a taste of the astrological perspective.

    Unfortunately, this time around no part of the United States will see the full eclipse. However, some metropolitan areas, such as Philadelphia and New York, along with Toronto in Canada, will be able to view a partial eclipse shortly after sunrise on Thursday morning.

    During a partial eclipse, the sun looks as if a portion has been taken out of it. In total, this eclipse will last around 100 minutes (about 1 2/3 hours), beginning at sunrise in Ontario, Canada.

    If you aren’t exactly clear on what this type of solar eclipse is: an annular eclipse occurs when the Moon is near its farthest point from Earth. Because the Moon is farther away, it appears smaller and does not block the entire view of the Sun, creating the appearance of a ring around the Moon.
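    The geometry behind the ring can be checked with a rough angular-size calculation. The values below are approximate textbook figures (Moon radius ~1,737 km at an apogee distance of ~405,500 km; Sun radius ~696,000 km at ~149.6 million km), so this is an illustrative sketch rather than precise ephemeris data:

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Apparent angular diameter, in degrees, of a sphere of the given
    radius seen from the given distance."""
    return math.degrees(2 * math.atan(radius_km / distance_km))

moon_deg = angular_diameter_deg(radius_km=1_737, distance_km=405_500)  # Moon near apogee
sun_deg = angular_diameter_deg(radius_km=696_000, distance_km=149_600_000)

# Near apogee the Moon's apparent disk is smaller than the Sun's,
# so a bright ring of the Sun remains visible around it.
assert moon_deg < sun_deg
print(f"Moon: {moon_deg:.2f} deg, Sun: {sun_deg:.2f} deg")
```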

    Check out additional detailed information and maps about the eclipse on the site maintained by retired NASA astrophysicist Fred Espenak.

    The word annular comes from the Latin word for ring. Since the Moon covers the Sun’s center, what remains visible forms a ring, hence the name “ring of fire”.

    If you are one of the lucky folks situated along the East Coast or Upper Midwest and want to catch a glimpse of the partial eclipse, it is strongly recommended to use solar eclipse glasses and not look directly at the sun, as doing so may cause permanent damage to your eyes.

    Don’t fret if you aren’t able to experience this solar eclipse. This summer offers a couple more opportunities to gaze upward: a supermoon on June 24, a meteor shower on July 28, and a blue moon on August 22.

    We have a couple of years until the next total solar eclipse in the United States, on April 8, 2024, weather permitting.
