One of the most interesting aspects of working in clean technology is the interaction between different sectors in the space - the Food-Energy-Water nexus, for example.
What is the food-energy-water nexus? Well, since we're not in the Star Trek universe or any other science fiction arena, we're definitely not talking about some black hole where food, energy and water go to die :)! When we talk about the nexus between clean tech sectors, we're asking: how do these sectors interact with each other? How complex are those interactions, and can the relationships between them be described? In the case of the food-energy-water nexus, we're exploring the interactions between the food, energy and water sectors. For example, we need water to grow food and produce energy, but energy is also needed to pump groundwater and to process food.
Just for fun, let's take a look at some numbers on the food-energy-water nexus from the UN and FAO. Let's start with the biggest one - agriculture. Agriculture accounts for approximately 70% of the world's freshwater withdrawals, and close to 80% of global cropland depends on rainfall to grow crops. At the same time, the food production and supply chain is responsible for approximately 30% of global energy consumption. The numbers for the energy sector are no less daunting. Almost 75% of industrial water withdrawals go to energy production - both conventional and renewable. In developed regions like the US and Europe, cooling power plants accounts for between 40% and 50% of freshwater withdrawals. And on the flip side, treating water and wastewater to adequate standards and ensuring sufficient water is available - through groundwater pumping or water transport - all require energy.
As we can see, each of these sectors is strongly dependent on the others, and understanding the interactions and behaviors of these inter-related systems is extremely complex.
But that's where data, models and machine learning really start to make things interesting! The systems we have are complicated and inter-connected, and the questions we need answered range from simpler ones, like understanding the energy use of different parts of a water treatment plant, to complicated ones, like how increasing the acreage of almonds can impact water availability in California.
In the past, scientists, companies and policy makers had to rely on simplifying assumptions in order to build and solve their models. Today, however, we can create a dizzying array of complicated, interacting models that combine big data like satellite imagery, machine learning algorithms, physical models of power plants, and all kinds of other tools to generate and answer questions about these systems. And we can do that at scale - from answering the smaller, simpler questions about a single system, to exploring the interactions between different systems, to working on "Digital Twins" of the Earth.
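To make "combining physical models with data" a little more concrete, here is a minimal, purely hypothetical sketch: a toy physical model of a power plant's cooling-water use is corrected by a data-driven term fitted to observed residuals with ordinary least squares. The model form, the numbers, and the function names are all invented for illustration.

```python
# Hypothetical sketch: a simple physical model plus a data-driven
# correction fitted by least squares. All numbers are illustrative.

def physical_water_use(mwh):
    """Toy physical model: cooling water (gallons) per MWh generated."""
    return 500.0 * mwh  # assumed constant withdrawal rate

# "Observed" plant data: generation (MWh) and measured water use (gallons)
generation = [10, 20, 30, 40, 50]
observed = [5600, 10900, 16400, 21500, 27100]

# Residuals between the observations and the physical model
residuals = [obs - physical_water_use(g) for g, obs in zip(generation, observed)]

# Fit a line residual ≈ a * generation + b (closed-form least squares)
n = len(generation)
mean_x = sum(generation) / n
mean_y = sum(residuals) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(generation, residuals)) / \
    sum((x - mean_x) ** 2 for x in generation)
b = mean_y - a * mean_x

def hybrid_water_use(mwh):
    """Physical model plus the fitted data-driven correction."""
    return physical_water_use(mwh) + a * mwh + b

print(hybrid_water_use(35))  # → 18980.0
```

The pattern - let the physical model carry the known structure and let the data correct its systematic errors - is one simple way the "models plus machine learning" combination plays out in practice.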
What do Google, Climate Corporation, early-stage startups in farm robotics, and researchers trying to figure out how to feed the world sustainably have in common? They're all grappling with one of the toughest challenges of working with natural systems: how do you work with data that is sparse and unevenly distributed, in systems that have so many connections and interactions with other systems? Before the advent of cheap sensors connected to phones, easily accessible satellite data, and drones that can fly over fields quickly and inexpensively, scientists in companies and academia developed plant and crop models that incorporated as many aspects of the farm, and as much data, as was available, so that they could understand and predict what was likely to happen in the field. Understandably, the forecasts took some time to produce, and as the models grew more complex, so did the difficulty of estimating model parameters and the uncertainty associated with the results.
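The parameter-uncertainty point can be illustrated with a tiny sketch. Assume a deliberately over-simplified crop model (invented for this example, not a real agronomic model) whose two parameters are only known approximately; Monte Carlo sampling then propagates that parameter uncertainty into a spread of predicted yields.

```python
# Hypothetical sketch: propagating parameter uncertainty through a
# toy crop model with Monte Carlo sampling. All numbers are made up.
import random

random.seed(42)

def toy_yield(rainfall_mm, water_eff, baseline):
    """Invented crop model: yield (t/ha) as a function of seasonal rainfall."""
    return baseline + water_eff * rainfall_mm

# Uncertain parameter estimates (means and standard deviations are assumed)
water_eff_mu, water_eff_sigma = 0.004, 0.0008   # t/ha per mm of rain
baseline_mu, baseline_sigma = 1.5, 0.3          # t/ha with no rain

rainfall = 600  # mm, a hypothetical season

# Sample parameters, run the model, collect the predicted yields
samples = []
for _ in range(10_000):
    eff = random.gauss(water_eff_mu, water_eff_sigma)
    base = random.gauss(baseline_mu, baseline_sigma)
    samples.append(toy_yield(rainfall, eff, base))

samples.sort()
mean = sum(samples) / len(samples)
lo, hi = samples[250], samples[-251]  # roughly a 95% interval
print(f"mean yield ~ {mean:.2f} t/ha, 95% interval ~ [{lo:.2f}, {hi:.2f}]")
```

Real crop models have dozens of interacting parameters rather than two, which is exactly why estimating them, and the uncertainty they induce, became such a challenge as the models grew.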
A mid-sized data center consumes around 300,000 gallons of water a day - about as much as 1,000 U.S. households. About 20% of data centers in the United States already rely on watersheds under moderate to high stress from drought and other factors. Operating a data center often requires a tradeoff between water use and energy use. And in a survey of 122 data centers in the United States, only 16% (20 utilities) reported plans for managing water-related risks. As professionals working in the field, what can we do to address this? One approach is developing and using water models that can identify water risks at different scales, so that we can predict the risk to water supplies under a changing climate. A second is using machine learning to identify and optimize water use among all the stakeholders in the watershed - data centers, farmers, cities, other industries - so that biases and needs are brought out into the open and the key issues identified.
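As a cartoon of what "optimizing water use among stakeholders" might mean computationally, here is a hypothetical sketch with made-up demands and a simple proportional-curtailment rule - not any real allocation policy or optimization algorithm:

```python
# Hypothetical watershed allocation sketch. Demands (million gallons/day)
# and the curtailment rule are invented for illustration only.

demands = {
    "data_center": 0.3,   # ~300,000 gal/day, the figure cited above
    "city": 40.0,
    "farms": 120.0,
    "industry": 25.0,
}

def allocate(available, demands):
    """If supply covers total demand, everyone gets their request;
    otherwise curtail all users proportionally."""
    total = sum(demands.values())
    if available >= total:
        return dict(demands)
    factor = available / total
    return {user: d * factor for user, d in demands.items()}

# A drought year: only 75% of total demand is available
allocation = allocate(0.75 * sum(demands.values()), demands)
for user, amount in allocation.items():
    print(f"{user}: {amount:.2f} MGD")
```

Real allocation is, of course, far messier - water rights, seasonality, return flows, equity - which is where the modeling and machine learning described above earn their keep.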
Our online community space is now open to anyone who has signed up for a free or paid course on our website! In addition to everyone who signed up for our cohort-based courses, we're now expanding it to all the members of our community. If you've already signed up for any of our courses, check your email for the invitation to the space. It's where we'll get together to talk about all things data science and clean technology, discuss the latest research, network, and make connections with other professionals in the sector. It's an invitation-only space - no bots and no trolls allowed - so come on over! Here's where you can check out our courses and join our community!