This is the fourth article in a series for future mathematician-programmers who will solve problems of modeling oil production and developing engineering software for the oil industry.

Today we will talk about why field models are needed and how they are built. A model is both a plan of action, which you must have anyway, and a prediction of the expected result of those actions.

Modeling, Forecast, Uncertainty

All the physical effects listed in the previous articles (one, two, three) matter for more than just understanding how the world works. Most likely, they will have to be taken into account when building a model that can correctly predict the future. Why should we be able to predict the future in oil production, when neither the oil price nor the coronavirus can be predicted anyway? For the same reason as everywhere else: to make the right decisions.


In the case of a field, we cannot directly observe what is happening underground between the wells. Almost everything accessible to us is tied to the wells, that is, to rare dots scattered over vast expanses of swamp (everything we can actually measure covers about 0.5% of the rock; the properties of the remaining 99.5% we can only guess at). These are the measurements taken while a well was being drilled; the readings of instruments installed in the wells (bottomhole pressure, the proportions of oil, water, and gas in the output); and the measured and prescribed operating parameters of the wells: when to turn on, when to turn off, at what rate to pump.

A good model is one that correctly predicts the future. But the future has not arrived yet, and we want to know whether the model is good now, so this is what is done: all the available factual information about the field is put into the model, our guesses about the unknown information are added according to our assumptions (the catchphrase "two geologists, three opinions" is precisely about such conjectures), and the underground processes of filtration, pressure redistribution, and so on are simulated. The model outputs the well performance indicators that should have been observed, and these are compared with the actually observed ones. In other words, we try to build a model that reproduces the history.
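The comparison with observed history usually boils down to some numerical misfit between simulated and actual well indicators. A minimal sketch of such a metric (well names and rates are invented for illustration):

```python
# A minimal history-match quality metric: RMS error between observed and
# simulated well rates. Well names and numbers are made up for illustration.

def history_match_misfit(observed, simulated):
    """Root-mean-square error over all wells and time steps (rates in m3/day)."""
    total, count = 0.0, 0
    for well, obs_rates in observed.items():
        for o, s in zip(obs_rates, simulated[well]):
            total += (o - s) ** 2
            count += 1
    return (total / count) ** 0.5

observed = {"W-1": [120.0, 115.0, 108.0], "W-2": [80.0, 78.0, 75.0]}
simulated = {"W-1": [118.0, 116.0, 110.0], "W-2": [85.0, 79.0, 74.0]}
misfit = history_match_misfit(observed, simulated)  # ~2.45 for this data
```

In practice the misfit is weighted by measurement reliability and by which indicators matter most, but the idea is the same: one number that says how far the model is from the history.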

In fact, you could cheat and simply force the model to output the required data. But, firstly, you must not do that, and secondly, it will be noticed anyway (by the experts in the very state agencies that must approve the model).


If the model cannot reproduce the history, its input data must be changed, but which data? The factual data cannot be changed: they are the result of observing and measuring reality, data from instruments. Instruments, of course, have their own inaccuracy, and they are operated by people, who can also make mistakes and lie, but the uncertainty of the factual data in the model is usually small. What can and must be changed is what carries the greatest uncertainty: our assumptions about what is happening between the wells. In this sense, building a model is an attempt to reduce the uncertainty in our knowledge of reality (in mathematics this process is known as solving an inverse problem, and inverse problems in our field are as common as bicycles in Beijing!).
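The inverse problem can be illustrated with a deliberately tiny example: suppose pressure declines as a simple exponential with an unknown rate, and we pick the rate that best matches the "observed" data. The exponential model, the numbers, and the brute-force scan are all assumptions chosen for clarity; real history matching tunes thousands of parameters with far smarter algorithms.

```python
import math

def model_pressure(p0, k, times):
    # Toy exponential pressure decline; real filtration physics is far richer.
    return [p0 * math.exp(-k * t) for t in times]

def fit_decline_rate(p0, times, observed, k_grid):
    # Brute-force scan over candidate k values: keep the one with least misfit.
    def misfit(k):
        sim = model_pressure(p0, k, times)
        return sum((o - s) ** 2 for o, s in zip(observed, sim))
    return min(k_grid, key=misfit)

times = [0.0, 1.0, 2.0, 3.0]
observed = model_pressure(250.0, 0.30, times)  # synthetic "field data"
k_grid = [i / 100 for i in range(1, 101)]      # candidates 0.01 .. 1.00
best_k = fit_decline_rate(250.0, times, observed, k_grid)
```

Here the data were generated by the model itself, so the scan recovers the true rate exactly; with noisy real data there would be a whole region of almost equally good answers, which is exactly the uncertainty discussed above.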

If the model reproduces the history well enough, we have hope that the knowledge of reality embedded in it does not differ much from that reality. Then, and only then, can we run the model forward into the future as a forecast, and we will have more reasons to believe that forecast.

What if we manage to build not one but several different models, all of which reproduce the history well enough yet give different forecasts? We have no choice but to live with this uncertainty and make decisions with it in mind. Moreover, having several models that give a range of possible forecasts, we can try to quantify the risks of one decision or another, whereas with a single model we would be unjustifiably confident that everything will turn out exactly as it predicts.
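One common way to express such an ensemble of forecasts is with percentiles, often quoted as P10/P50/P90. The forecast numbers below are invented, and the linear-interpolation percentile is just one of several accepted definitions:

```python
# Cumulative oil forecasts (thousand tonnes) from several equally plausible
# history-matched models; the numbers are illustrative.
forecasts = [410.0, 455.0, 430.0, 520.0, 390.0, 470.0, 445.0]

def percentile(values, q):
    """Linear-interpolation percentile, q in [0, 100]."""
    s = sorted(values)
    pos = (len(s) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

p10, p50, p90 = (percentile(forecasts, q) for q in (10, 50, 90))
```

The spread between the low and high percentiles is a direct, if crude, measure of the risk a decision-maker is taking on.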

Models in the life of the field

To make decisions while developing a field, you need a holistic model of the entire field. Moreover, nowadays you cannot develop a field without one at all: the government bodies of the Russian Federation require such a model.


It all starts with a seismic model, built from seismic survey data. Such a model makes it possible to "see" three-dimensional surfaces underground: specific layers from which seismic waves reflect well. It gives almost no information about the properties we need (porosity, permeability, saturation, etc.), but it does show how the layers bend in space. If you made a multilayer sandwich and then somehow bent it (or someone sat on it), you have every reason to believe that all the layers are bent in roughly the same way. Therefore we can understand how the whole layer cake of sediments settling on the ocean floor was curved, even if the seismic model shows us only one of the layers, the one that happens to reflect seismic waves well. At this point the data science engineers perk up, because the automatic picking of such reflecting horizons in a seismic cube, which participants of one of our hackathons worked on, is a classic pattern recognition problem.


Then exploratory drilling begins, and as wells are drilled, instruments are lowered into them that measure various quantities along the wellbore, that is, well logging (geophysical well surveys) is carried out. The result of such a survey is a well log: a curve of some physical quantity measured at a fixed step along the entire wellbore. Different instruments measure different quantities, and trained engineers then interpret these curves to extract meaningful information. One instrument measures the natural gamma radioactivity of the rock. Clays give a stronger gamma background, sandstones a weaker one; any interpreter knows this and marks them on the log curve: here are clays, here is a sandstone bed, here is something in between. Another instrument measures the spontaneous electrical potential between adjacent points that arises when drilling fluid penetrates the rock. A high potential indicates a filtration connection between points of the reservoir, so the engineer records the presence of permeable rock. A third instrument measures the resistivity of the fluid saturating the rock: salt water conducts current, oil does not, which allows oil-saturated rocks to be distinguished from water-saturated ones. And so on.

At this point the data science engineers perk up again, because the input to this problem is just numerical curves, and replacing the interpreter with some ML model that draws conclusions about the rock properties from those curves means solving a classic classification problem. The data scientists' eyes started twitching only later, when it turned out that some of the accumulated curves from old wells exist only as long paper scrolls.
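The crudest possible version of that classification is a pair of cutoffs on the gamma-ray curve. The readings, the cutoffs, and the labels below are all invented for illustration; a real interpreter (or ML model) uses several curves at once:

```python
# Hypothetical gamma-ray readings (API units) sampled every 0.4 m along a
# wellbore. Clays tend to read high, sandstones low; cutoffs are invented.

def classify_lithology(gr_value, sand_cutoff=45.0, clay_cutoff=90.0):
    if gr_value <= sand_cutoff:
        return "sandstone"
    if gr_value >= clay_cutoff:
        return "clay"
    return "transitional"

gamma_log = [30.1, 42.7, 55.0, 97.3, 110.4, 38.9]
labels = [classify_lithology(g) for g in gamma_log]
```

A trained classifier does essentially the same thing, except that it learns the decision boundaries from examples labeled by human interpreters instead of having them hard-coded.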


In addition, during drilling a core is brought up from the well: samples of rock that are more or less intact (if we are lucky). These samples are sent to a laboratory, which determines their porosity, permeability, saturation, and various mechanical properties. If it is known (and recorded correctly) from what depth a particular core sample was taken, then when the laboratory data arrive it becomes possible to compare the values that all the geophysical instruments showed at that depth against the porosity, permeability, and saturation the rock had at that depth according to the core analysis. In this way the readings of the geophysical instruments can be calibrated, so that later, from their data alone, without a core, we can draw conclusions about the rock properties we need to build the model. The devil, as always, is in the details: the instruments do not measure exactly what the laboratory determines, but that is a whole other story.
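In its simplest form, such a calibration is just a regression tying a log reading to a lab-measured property at matching depths. The numbers below are fabricated and happen to lie on a perfect line; real core-to-log calibration deals with scatter, depth-shift errors, and multiple curves:

```python
# Least-squares line tying a (made-up) density-log reading to lab-measured
# core porosity, so porosity can later be predicted where no core exists.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

log_reading = [2.30, 2.40, 2.50, 2.60]    # g/cm3, hypothetical
core_porosity = [0.24, 0.19, 0.14, 0.09]  # fraction, from the lab

slope, intercept = fit_line(log_reading, core_porosity)

def predict_porosity(density):
    return slope * density + intercept
```

Once fitted, the line is applied to every logged depth in every well, core or no core, which is exactly the "calibrate once, apply everywhere" idea in the paragraph above.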

Thus, having drilled several wells and run the surveys, we can state fairly confidently which rock, with which properties, lies where those wells were drilled. The problem is that we do not know what is happening between the wells. And here the seismic model comes to our aid.


At the wells we know exactly which rock properties occur at which depth, but we do not know how the rock layers observed at the wells propagate and bend between them. The seismic model cannot accurately determine which layer lies at which depth, but it confidently shows how all the layers propagate and bend at once, the character of the bedding. The engineers then mark characteristic points in the wells, placing markers at certain depths: at this depth lies the top of the formation, at that depth its base. The surfaces of the top and base between the wells are then, roughly speaking, drawn parallel to the surface seen in the seismic model. The result is a set of three-dimensional surfaces bounding the part of space we care about, and of course we care about the formations containing oil. What we get is called a structural model, because it describes the structure of the formation but not its contents: a structural model says nothing about porosity, permeability, saturation, or pressure inside the formation.


Then comes the discretization stage, in which the region of space occupied by the field is divided, following the bedding (whose character is still visible in the seismic model!), into a curved parallelepiped of cells. Each cell of this curved box is uniquely identified by three indices: I, J, and K. The layers of the box follow the bedding of the formation, while the number of layers in K and the number of cells in I and J are determined by the level of detail we can afford.

How detailed is our information about the rock along the wellbore, that is, vertically? As detailed as the measurement step of the logging instrument moving along the wellbore, which is typically every 20-40 cm, so each layer of cells can be 40 cm or 1 m thick.

How detailed is our information laterally, that is, away from the well? Not detailed at all: away from the well we have no information, so it makes no sense to divide space into very small cells along I and J; most often they are 50 or 100 m in each coordinate. Choosing the size of these cells is one of the important engineering decisions.
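In code, such a grid is in essence just a three-dimensional array with one value per property per cell. The dimensions below are invented, and real simulators use specialized corner-point geometry formats rather than plain nested lists:

```python
# A sketch of the discretized model: an NI x NJ x NK grid where each cell
# (i, j, k) holds one constant value per property. Dimensions are invented:
# cells ~100 m laterally (I, J) and ~1 m vertically (K).
NI, NJ, NK = 100, 80, 50

def make_property_grid(ni, nj, nk, fill=None):
    """3D nested-list array; None marks 'value not yet known'."""
    return [[[fill for _ in range(nk)] for _ in range(nj)] for _ in range(ni)]

porosity = make_property_grid(NI, NJ, NK)
porosity[10][20][5] = 0.18  # a value brought into one cell from a well log
```

Even this modest 100 x 80 x 50 example has 400,000 cells per property, which hints at why grid resolution is a trade-off between detail and compute cost.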


After the entire region of space is divided into cells, the expected simplification is made: within each cell, the value of every parameter (porosity, permeability, pressure, saturation, etc.) is considered constant. Of course, this is not true in reality, but since we know that sedimentation on the sea floor proceeded in layers, rock properties change much faster vertically than horizontally.


So, we have a grid of cells, and each cell has its own (unknown to us) value of each of the important parameters describing the rock and the fluids saturating it. So far the grid is empty, but some cells are crossed by wells along which an instrument has passed and produced the geophysical log curves. Interpretation engineers, armed with laboratory core studies, correlations, experience, and a strong word or two, convert the values of the log curves into the rock and fluid characteristics we need, and transfer those values from the well into the grid cells the well passes through. The result is a grid that has values in some cells, while most cells still have none. The values in all the other cells have to be conjectured using interpolation and extrapolation. The geologist's experience and knowledge of how rock properties are usually distributed allow the right interpolation algorithms to be chosen and their parameters to be set sensibly. But in any case, we must remember that all this is speculation about the uncertainty lying between the wells; it is not for nothing that, as the common saying I have already quoted goes, two geologists will have three different opinions about the same deposit.
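One of the simplest interpolation schemes for filling the empty cells is inverse-distance weighting. The well positions and porosity values below are invented, and real geological modeling relies on much richer geostatistics (variograms, kriging, stochastic simulation), but the idea of spreading well values over the grid is the same:

```python
# Inverse-distance weighting (IDW): one naive way to guess porosity in a
# cell from the wells around it. Coordinates and values are invented.
import math

wells = [((10.0, 10.0), 0.21), ((90.0, 15.0), 0.14), ((50.0, 80.0), 0.18)]

def idw(x, y, wells, power=2.0):
    num, den = 0.0, 0.0
    for (wx, wy), value in wells:
        d = math.hypot(x - wx, y - wy)
        if d < 1e-12:
            return value  # exactly at a well: take its measured value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

estimate = idw(50.0, 40.0, wells)  # a cell between all three wells
```

Note that IDW can never produce a value outside the range seen at the wells, which is one reason geologists prefer methods that can also reproduce realistic heterogeneity.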

The result of this work is a geological model: a three-dimensional curved parallelepiped divided into cells, describing the structure of the field, plus several three-dimensional arrays of properties in those cells; most often these are arrays of porosity, permeability, saturation, and the "sandstone"/"clay" attribute.


Then the hydrodynamics specialists take over. They may coarsen the geological model by merging several layers vertically and recomputing the rock properties (this is called upscaling, and it is a separate hard problem). Then they add the remaining properties the hydrodynamic simulator needs in order to compute what will flow where: besides porosity, permeability, and oil, water, and gas saturation, this includes pressure, gas content, and so on. They add the wells to the model and enter information about when and in which regime each one operated. You have not forgotten that we are trying to reproduce the history in order to have hope of a correct forecast? The hydrodynamicists take the laboratory reports and add to the model the physicochemical properties of oil, water, gas, and rock, with all their dependencies (most often on pressure), and all of this together, now a hydrodynamic model, is fed to a hydrodynamic simulator. The simulator honestly computes what flows from which cell to which at each moment in time, produces plots of the technological indicators for every well, and these are carefully compared with the real historical data. The hydrodynamic engineer sighs at the discrepancy and goes off to adjust all the uncertain parameters he is trying to guess, so that the next simulator run produces something closer to the observed data. Or maybe the run after that. Or the one after that, and so on.
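To give some flavor of what the simulator does cell to cell, here is a deliberately toy one-dimensional pressure-diffusion step. The coefficient, grid, and boundary values are invented; a real simulator solves coupled multiphase flow equations on the full 3D grid with implicit schemes:

```python
# A toy 1D explicit pressure-diffusion step: the crudest possible stand-in
# for what a hydrodynamic simulator does between neighboring cells.

def diffuse_step(pressure, alpha=0.2):
    """One explicit time step; the boundary cells are held fixed."""
    p = pressure
    new = p[:]
    for i in range(1, len(p) - 1):
        new[i] = p[i] + alpha * (p[i - 1] - 2 * p[i] + p[i + 1])
    return new

# Pressures in bar; the two end cells act as fixed boundary conditions.
p = [300.0, 250.0, 250.0, 250.0, 100.0]
for _ in range(50):
    p = diffuse_step(p)
# After many steps the profile relaxes toward a straight line between
# the fixed ends: roughly 300, 250, 200, 150, 100.
```

Multiply this idea by hundreds of thousands of cells, three phases, wells as sources and sinks, and property tables that depend on pressure and saturation, and you have the rough shape of the computation the engineer re-runs after every parameter tweak.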


The engineer preparing the surface facilities model takes the flow rates that the field will produce according to the simulation results and feeds them into his own model, which computes what pressure each pipeline will see and whether the existing pipeline system can "digest" the field's production: treat the produced oil, prepare the required volume of injection water, and so on.

And finally, at the top level, the level of the economic model, an economist computes the stream of expenses for building and maintaining wells and for the electricity that runs the pumps and pipelines, and the stream of income from delivering the produced oil into the pipeline system, multiplies each by the appropriate power of the discount factor, and obtains the total NPV of the field development project.
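The NPV calculation itself is the simplest step in the whole chain. A minimal sketch with invented cash flows (first year is capital expenditure, later years are net income):

```python
# Net present value: yearly net cash flows discounted at a fixed rate.
# All figures are invented for illustration.

def npv(cash_flows, rate):
    """cash_flows[t] is the net cash flow at the end of year t + 1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

flows = [-500.0, 200.0, 250.0, 220.0, 180.0]  # e.g. million rubles per year
project_npv = npv(flows, rate=0.10)           # positive: project pays off
```

The hard part, of course, is not this formula but everything feeding it: every number in `flows` is the output of the entire modeling chain described above, uncertainty included.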

Preparing all these models naturally requires heavy use of databases for storing the information and of specialized engineering software that implements both the processing of all the input information and the modeling itself, that is, predicting the future from the past.

Each of the models above is built in a separate software product, most often foreign-made, often with virtually no alternatives and therefore very expensive. Such products have been evolving for decades, and repeating their path with the resources of a small company is no easy task. But the dinosaurs were eaten not by other dinosaurs, but by small, hungry, determined ferrets. The important thing is that, as with Excel, only about 10% of the functionality is needed for daily work, and our replacements, like the Strugatskys' specialists who "know little, but know it well", will cover exactly those 10%. In general, we are full of hope, and there are already certain grounds for it.

This article describes only the main highway of the life cycle of a whole-field model, and there is already plenty of room along it for software developers to roam, while with the current pricing models the incumbents will keep competitors well supplied with work. The next article will be a spin-off, a "Rogue One" of sorts, about some specific engineering modeling problems: hydraulic fracturing modeling and coiled tubing.

To be continued...