MiningMath


Mathematical optimization models that integrate multiple business areas

Tutorials

Beginner's guide

Run your first project

Our Getting Started training is a sequence of pages that teaches you how to run your first project: from the installation process and the formatting of your model files up to the long-term planning of your project.


Must-Read Articles

To get the most out of MiningMath's optimization, we recommend this flow through our Knowledge Base. It will guide you step by step to integrate multiple business areas and improve your strategic analysis through risk assessments unconstrained by step-wise processes.

Set up and first run

  1. Quick Check: Here you’ll have all the necessary instructions to install, activate and run MiningMath.

  2. How to run a scenario: Once everything is ready, it's time to run your first scenario with MiningMath so you can familiarize yourself with our technology!

Find new results

  1. Playing with pre-defined scenarios: Each change in a scenario opens a new world of possibilities. Therefore, it's time to understand a little more about them and see it in practice by playing with pre-defined scenarios.

  2. Decision Trees: Decision Trees provide a detailed yet broad view of your project, allowing you to plan your mining sequence by analyzing every possibility in light of the constraints applied to each scenario, which options are more viable and profitable for the global project, and how these factors impact the final NPV.

Understand the technology in depth

  1. Current best practices: Here we go through the modern technology usually employed by other mining packages. It is important to understand it in order to appreciate MiningMath's differentials.

  2. MiningMath uniqueness: Now that you’ve practiced the basics of MiningMath, and understand how other mining packages work, it’s time to get deep into the theory behind the MiningMath technology.

  3. Interface Overview: It's time to explore our interface, with detailed information about every screen and constraint available in MiningMath: the Home page, Model tab, Scenario tab, and Viewer, for a better understanding of the possibilities.

Using and validating your data

  1. Formatting the Block Model: Learn how to format your block model data and use it in MiningMath.

  2. Importing the block model: Go through the import process and properly configure your data.

  3. Economic Values: MiningMath does not require pre-defined destinations ruled by an arbitrary cut-off grade. Instead, the software uses an Economic Value for each possible destination and for each block. After your data is formatted and imported into MiningMath, you may build your Economic Value for each possible destination.

  4. Data Validation: Once your data is set, it's time to validate it by running MiningMath with a bigger production capacity than the expected reserves. This way, you will get and analyze results faster.

  5. Constraints Validation: Continuing the validation, start to add the first constraints related to your project so that you can understand its maximum potential.

Improve your results

  1. Integrated Workflow: Each project has its own characteristics, and MiningMath allows you to choose which workflow best fits your needs.

  2. Super Best Case: In the search for the upside potential for the NPV of a given project, this setup explores the whole solution space without any other constraints but processing capacities, in a global multi-period optimization fully focused on maximizing the project’s discounted cash flow.

  3. Optimized Pushbacks: Identify timeframe intervals in your project, so that you can work with grouped periods before getting into a detailed insight. This strategy allows you to run the scenarios faster without losing flexibility or adding dilution to the optimization, which happens when you reblock.

  4. Optimized Schedules: Consider your real production and explore scenarios to extract the most value in terms of NPV.

  5. Short-term Planning: Now that you built the knowledge about your project based on the previous steps, it is time to start the integration between long and short-term planning in MiningMath. You may also optimize the short-term along with the long-term using different timeframes.

Export your results

  1. Exporting Data: After running your scenarios, you can export all data. Results are automatically exported to CSV files to integrate with your preferred mining package.

In-Depth MiningMath

This tutorial provides detailed guidance to the pages in the knowledge base for new MiningMath users. A shorter tutorial, with a set of must-read articles, can be found here. In this tutorial, a larger number of pages is contextualized and recommended for those with no previous experience using MiningMath who wish to gain more advanced knowledge.

Software requirements

  1. Quick check: Verify if your computer has all the minimum/recommended requirements for running the software.

  2. Put it to run: Here you’ll have all the necessary instructions to install, activate and run MiningMath.

Set up the block model

The next step after installation is to understand the home page interface and import your project data. The following pages go over these in detail.

  1. Home page: MiningMath automatically starts on this page. It depicts your decision trees, recent projects and model information.

  2. Import your block model: import your csv data, name your project, set fields and validation.

  3. Modify the block model: this window helps you modify your block model according to what is required for your project, and also allows you to export the block model to CSV format for use with any other software.

  4. Calculator: calculate and create new fields by manipulating your project inside MiningMath.

Handling unformatted data

If you don’t have a block model ready to be imported you might want to create a new one. The following pages can guide you through this process.

Define the scenario and run

Once you have your block model defined, there are several options to set up your project's parameters before running a scenario.

  1. Scenario tab: set densities, economic parameters, slope angles, stockpiles, add/remove processes and dumps, production inputs, geometric inputs and so on.

  2. Save as: save the scenario's configuration once it has been configured.

  3. Run: the Run tab is the last step before running your project’s optimization. Change the scenario name, set a time limit, and set up results files.

Results

After running your scenario it is important to analyze and understand the given results.

  1. Output files and 3D viewer: by default, MiningMath generates an Excel report summarizing the main results of the optimization. It also creates outputs of mining sequence, topography, and pit surfaces in csv format so that you can easily import them into other mining packages. The 3D viewer enables a view of your model from different angles. 

  2. Export model: export your model as a csv file. This can be used in new scenarios or imported in other mining packages.

Extensive set-up

MiningMath offers a lot of customization. You might use pre-defined scenarios to learn with standard parameters. Otherwise, the following pages of the knowledge base detail several important parameters that might need to be fine-tuned for your project.

Advanced content

Complex projects might need advanced configurations or advanced knowledge in certain topics. The following pages cover some subjects considered advanced in our knowledge base.

Theory

In order to understand the theory behind MiningMath’s algorithm, a set of pages is provided to describe mathematical formulations, pseudo-code, and any rationale to justify the software design.

Workflows

MiningMath acknowledges and supports different workflows. This knowledge base provides a set of articles aimed at showing how MiningMath can be integrated into other workflows or have its results used by different mining packages.

Getting Started

Quick Check

System requirements

The only mandatory requirement for using MiningMath is a 64-bit system. Other minimum requirements are listed below:

  1. Windows 10

  2. 64-bit system (mandatory)

  3. 110 MB of space (installation) + additional space for your projects' files.

  4. Processor: processors above 2.4 GHz are recommended to improve your experience.

  5. Memory: at least 8 GB of RAM is required. 16 GB of RAM or higher is recommended to improve your experience.

  6. Microsoft Excel.

  7. OpenGL 3.2 or above. Discover yours by downloading and running the procedure available here.

  8. Visual C++ Redistributable: Installation of Visual C++ Redistributable is necessary to run this software.

Recommended Hardware

Memory should be the highest priority when choosing the machine on which MiningMath will run. Here's a list of priority upgrades to improve performance with large-scale datasets:

  1. More RAM

  2. Higher RAM frequency

  3. Higher processor clock speed

Common Issues

Insufficient memory

As previously presented, RAM should be one of the most important components to prioritize when selecting a computer to run MiningMath, especially because Windows alone consumes a significant amount of memory.

However, if you encounter an insufficient memory warning or a sudden crash while using MiningMath, there are some recommendations you can consider:

1. Memory Upgrade: If possible, this is the best solution to enhance efficiency. The characteristics to observe are listed in the previous item, “Recommended Hardware.” Based on our experience with more complex projects, 64 GB is usually sufficient for nearly all cases.

2. Free Up Memory: Consider closing other applications that are consuming the computer’s RAM while MiningMath is running.

3. Increase Windows Virtual Memory: This procedure involves allocating disk space to be used as RAM. To perform this procedure, we recommend this tutorial.

4. Reblock: If none of these options work, reblocking can be considered to reduce the size of the model. Check more details here.

Extra: In exceptional cases, when working with boxes, it may be viable to manipulate the block coordinates to bring them closer together, creating a smaller model box.

Put It to Run!

Installing, Activating and Running

Installing and activating MiningMath is quick and straightforward. All you need to do is follow the setup wizard and have an internet connection to activate your license. 

Video 1: MiningMath installation process.

Activating Your License

To activate your license, you will need to: 

  1. Open MiningMath (it will open automatically after the installation, but you can open it manually afterwards).

  2. On the left menu, click License.

  3. Select the field "I have an activation code" and paste the License Code provided by MiningMath.

  4. Click "Activate license".

Opening an old project

If you need to open an old project, just follow these steps:

  1. Open MiningMath (it will open automatically after the installation, but you can open it manually afterwards).

  2. On the left menu, click on Open Project.

  3. Search for the folder in which you saved your old project.

  4. Select the ".ssprj" file.

  5. Click on "Open" and it will show up in the "Recent Projects" list.

  6. Now you can open it!

The images below illustrate this process:

NOTE

MiningMath’s licensing method demands an internet connection. 

Optimizing Scenarios

Play with the predefined scenarios

MiningMath allows you to learn with each scenario by providing standard parameters which simulate some common constraints that a mining company may face. Standard scenarios are listed and described below so you can identify the main changes made within the “Overview” tab.

The ultimate goal of this practice is to prepare you to build Decision Trees, which allow you to organize scenarios in order to understand how variables influence one another and, consequently, how these variables determine the final NPV.

Figure 1: Scenarios on the Home Page

Market Conditions Decision Tree

1) BaseCase

The Base Case consists of the initial scenario, with a uniform production capacity and without sum, average or surface mining limits.

2) BaseCase-RampUp

While the base case considers a uniform production capacity, the BaseCase-RampUp scenario offers the possibility to vary the levels of production within different timespans. We have an initial production capacity of 10 Mt in the first 2 periods; 20 Mt in periods 3 and 4; and 30 Mt from period 5 until the end of the mine's lifetime, with total movement constraints of 30, 60, and 80 Mt, following the production increase within the time frames mentioned.

Figure 3: BaseCase-RampUp

3) PriceUp and PriceDown

Scenarios “PriceUp” and “PriceDown” differ in relation to the basic scenario in the economic value used for the calculation of the P1 process, where there is an increase and a decrease of 10% at the copper selling price, respectively. In the destination tab, “P1 Cu +10” and “P1 Cu -10” were the values used for the process.

4) PriceUp-RampUp and PriceDown-RampUp

These scenarios consider a 10% copper selling price increase and decrease, and a ramp-up of the production capacity at the same time, as mentioned before.

5) PriceUp-RampUp-Protection300 and PriceUp-RampUp-Protection400

These scenarios consider a 10% copper selling price increase and a ramp-up of production at the same time, as mentioned in the previous scenario. In addition, a restricted mining surface (a constraint used to prohibit access to an area within a specific timeframe) was included up to the fourth period, since it may represent legal constraints on a project.

Figure 8: PriceUp-RampUp-Protections

Other Decision Trees

Below you can see a description of some scenarios of other Decision Trees.

1) MW150 (Geometries Decision Trees)

The MW150 scenario considers different geometries from the base case in the geometric constraints. In this scenario, 150 meters was used as the mining width (the horizontal distance between the walls of two surfaces that belong to consecutive periods), and a vertical rate of advance of 180 meters.

Figure 9: MW150

2) AvgCu (Average Decision Tree)

In the AvgCu scenario, blending constraints were added in the average tab to allow a minimum of 0.5% and a maximum of 0.7% average copper grade at the process plant. The optimization will have to fulfill the P1 process capacity and, as an additional challenge, meet this new set of parameters related to the average Cu content within the ore.

Figure 10: AvgCu

3) Proc13000h and Proc13000h-33Mt

(Process throughput Decision Tree)

Scenario Proc13000h considers 13,000 hours of processing equipment use as the maximum limit. This constraint was inserted in the sum tab, and it can control variables such as rock type feeding, energy consumption, and any parameter controlled by its sum. Scenario Proc13000h-33Mt considers an increase of 10% in production, inserted in the production tab, in addition to the parameters mentioned previously.

Figure 12: Processing hours

4) Yearly-TriannualProduction

(Short-Long Term Integration Decision Tree)

This scenario considers yearly production for the period range 1-4 and triannual planning for the range 5 to the end. This way it is possible to integrate both short- and long-term planning in a single run, facilitating the analysis and strategic definitions.

Any kind of timeframe can be used according to your needs.

Figure 13: Short-Long Term Integration

Translations

MiningMath supports and encourages the translation of its knowledge base to multiple languages. If you would like to translate our knowledge base and have your profile advertised please contact us.

Portuguese

Ask GPT

You can use ChatGPT to help you with our knowledge base. First, you will need to have the Plugins options enabled on GPT-4.

After that, choose the AskYourPDF option:

Finally, you should enter the following prompt:

For the requests all along this chat, consider the following content: https://miningmath.com/Knowledge-Base.pdf

Other prompts can help you with different requests. For example, you can ask GPT-4 to act as your own technical support agent that answers in the same language as your question:

Plugin AskYourPdf, consider the following content: https://miningmath.com/Knowledge-Base.pdf

Please answer the following question as a technical support agent, coming from a MiningMath user, in the same language as the question:

"QUESTION TEXT TO BE REPLACED"

Essential Topics

How to Run a Scenario

Video 1: Downloading MiningMath.

On MiningMath’s interface, you will find the Marvin block model and its scenarios (Figure 1). It is possible to preview the scenario and its parameters before opening it (Figure 2).

Choose and open Base Case, click the “Overview” tab (Figure 3) to check the parameters, and then click on “Run” to run the optimization (Figure 4).

After that, a short report with the results will be generated. To view it, check all the boxes on the “Load Options” window and click on “Load” (Figure 5).

Finally, whenever you feel ready to run your own scenarios, start by formatting your data here.

Common Issues: Setting your first scenario

When setting up your first scenario, you may come across some situations such as unavailable tabs and some fields marked in red. These situations are quite simple to resolve, as shown in the following video:


Results of the Optimization

By default, MiningMath generates an Excel report summarizing the main results of the optimization. It also creates outputs of mining sequence, topography, and pit surfaces in .csv format so that you can easily import them into other mining packages.

Viewer

The 3D viewer enables a view of your model from different angles. The block colors are defined according to the property displayed, varying from blue to red (smallest to largest) for destinations, periods, or any other parameter. Therefore, it's possible to filter the blocks by the period in which they were mined or processed, for instance. In addition, it also allows you to compare multiple scenarios by loading different cases and using the left bar to switch from one to another.

Output Files

After optimizing your block model and running your scenario(s), MiningMath generates standard output files with detailed reports. The main files have a universal format (.csv), which allows you to easily import them onto other mining packages to start your mine design and further steps of your projects.

To open the project folder, right-click on the scenario's name and choose "Show in the Explorer". The optimization's main output files are:

  • Scenarioname.xlsx: Short report with the main results.

  • MinedBlocks.csv: Detailed report which presents all the blocks that have been mined.

  • Surface.csv: Grid of points generated from the pit of each period.

Scenarioname.xlsx

Provides you with a short report with the main results of the optimization: several charts and sheets in which you can analyze the production on each period, the stockpiles by periods, the average grade of processes and dump, NPV per period, the cumulative NPV (Net Present Value), etc.

Figure 18: Graphic results

MinedBlocks.csv

This file offers a detailed report on all the mined blocks and their specificities: information on the mining sequence based on each block extracted, along with mined and processed periods, destinations, economic value, and all information used for the optimization. This file also allows you to identify blocks that were stocked and the algorithm decision-making process.

Figure 19: Mined blocks
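If you want to inspect these results programmatically, the summary below is a minimal sketch using Python and pandas. The column names used ("Period Mined", "Destination", "Block Tonnes", "Economic Value") are assumptions for illustration only; check the header of your own MinedBlocks.csv file.

```python
# Minimal sketch: summarize MinedBlocks.csv by period and destination.
# Column names below are assumptions -- adjust them to your own file's header.
import pandas as pd

blocks = pd.read_csv("MinedBlocks.csv")

summary = (
    blocks
    .groupby(["Period Mined", "Destination"])
    .agg(total_tonnes=("Block Tonnes", "sum"),
         total_value=("Economic Value", "sum"))
    .reset_index()
)
print(summary)
```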

Surface.csv

Surface.csv brings a grid of points generated from the pit of each period: each surface is named according to its mining period and contains information about the topographic coordinates at that time. These files can be imported into the viewer separately, so that you can verify and validate your data before starting the optimization process. Note: surfaces are exported from and imported into MiningMath as coordinates.

Figure 20: Surface's CSV

Video 1: Outputs and file hierarchy.

Play with Predefined Scenarios

MiningMath allows you to learn with each scenario by providing standard parameters which simulate some common constraints a mining company may face. Standard scenarios are listed and described below so you can identify the main changes made within the “Overview” tab.

The ultimate goal of this practice is to prepare you to build Decision Trees, which allow you to organize scenarios in order to understand how variables influence one another and, consequently, how these variables determine the final NPV.

Figure 1: Scenarios on the Home Page

Dataset

The examples in this page come preinstalled with every version of MiningMath. If you have deleted this project by any chance, please download the zip file below, extract the files and choose the “Open Project” option in MiningMath.

BaseCase

The Base Case consists of the initial scenario, with a uniform production capacity, and without sum, average, or surface mining limits.
Figure 2: BaseCase overview

BaseCase-RampUp

While the base case considers a uniform production capacity, the BaseCase-RampUp scenario offers the possibility to vary the levels of production within different timespans. We have an initial production capacity of 10 Mt in the first 2 periods; 20 Mt in periods 3 and 4; and 30 Mt from period 5 until the end of the mine's lifetime, with total movement constraints of 30, 60, and 80 Mt, following the production increase within the time frames mentioned.

Figure 3: BaseCase-RampUp

PriceUp and PriceDown

Scenarios “PriceUp” and “PriceDown” differ in relation to the basic scenario in the economic value used for the calculation of the P1 process, where there is an increase and a decrease of 10% at the copper selling price, respectively. In the destination tab, “P1 Cu +10” and “P1 Cu -10” were the values used for the process.

PriceUp-RampUp and PriceDown-RampUp

These scenarios consider a 10% copper selling price increase and decrease, and a ramp-up of the production capacity at the same time, as mentioned before.

PriceUp-RampUp-Protection300 and PriceUp-RampUp-Protection400

These scenarios consider a 10% copper selling price increase and a ramp-up of production at the same time, as mentioned in the previous scenario. In addition, a restricted mining surface (a constraint used to prohibit access to an area within a specific timeframe) was included up to the fourth period, since it may represent legal constraints on a project.

Figure 8: PriceUp-RampUp-Protections

Below you can see a description of some scenarios of other Decision Trees.

MW150

The MW150 scenario considers different geometries from the base case in the geometric constraints. In this scenario, 150 meters was used as the mining width (the horizontal distance between the walls of two surfaces that belong to consecutive periods), and a vertical rate of advance of 180 meters.

Figure 9: MW150

AvgCu

In the AvgCu scenario, blending constraints were added in the average tab to allow a minimum of 0.5% and a maximum of 0.7% average copper grade at the process plant. The optimization will have to fulfill the P1 process capacity and, as an additional challenge, meet this new set of parameters related to the average Cu content within the ore.

Figure 10: AvgCu

AvgCu-Stock5Mt

Here, the same blending constraints of the previous scenario (AvgCu) were added, in addition to a stockpile limit of 5 Mt for process 1 on the destination tab. This feature allows you to control the stockpile limit of your whole process, which increases the optimization flexibility to feed the plant while respecting the blending constraints already implemented.

Figure 11: AvgCU-Stock5Mt

Proc13000h and Proc13000h-33Mt

Scenario Proc13000h considers 13,000 hours of processing equipment use as the maximum limit. This constraint was inserted in the sum tab, and it can control variables such as rock type feeding, energy consumption, and any parameter controlled by its sum. Scenario Proc13000h-33Mt considers an increase of 10% in production, inserted in the production tab, in addition to the parameters mentioned previously.

Figure 12: Processing hours

Calculator

This feature allows the user to manipulate their project inside MiningMath, enabling adjustments and new field creation. Figure 1 shows a general view of the calculator. On the left side we have the block parameters and on the right the calculator itself, where the calculation can be done.

Figure 1: Calculator.

To use the calculator, just insert a name for the new field, select the type of field (to learn more about field types, access this link), and build your expression. In case of a more complex expression, just mark the "Logical Test" field to enable conditional features. The following operators are available:

  • + : Addition

  • - : Subtraction

  • * : Multiplication

  • / : Division

  • % : Modulus

  • ** : Exponentiation

  • // : Floor division

Practical approach

To facilitate understanding, let's work on an example. Below is a generic math expression followed by its equivalent written in MiningMath's calculator:

\((x^2)\times(\frac{y}{2}-1)\) is written as x**2*((y/2)-1)
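Since the calculator's operators mirror Python's arithmetic operators, you can sanity-check an expression outside MiningMath before typing it in. A minimal sketch, using arbitrary example values:

```python
# The calculator expression above, evaluated with Python's matching operators.
x, y = 4.0, 10.0               # example block values, for illustration only

value = x**2 * ((y / 2) - 1)   # same text as the calculator expression
print(value)                   # 64.0
```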

Adding a field without logical expression

Using the Marvin Economic Value calculation as an example, we are going to add a Block Tonnes field, as shown in Figure 2:

Figure 2: Adding a new field

Adding a field with a logical expression

One more time using Marvin’s block model, let’s suppose we want a maximum slope angle of 45 degrees.

First, we name our field, in this case "SlopeMax45d", select the field type "Slope", and check the Logical Test box. Then, double-click the Slope field to select it and place it in the Expression. The next step is to select the operator: since we want a maximum of 45 degrees, we choose the operator ">" and insert the value 45 in the text box. If the test is true, that is, if the value is greater than 45, the field is assigned the value 45. If it is false, i.e., lower than or equal to 45, it keeps its original value. Figure 3 shows this calculation:

Figure 3: Logical test expression
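A minimal sketch of the logic this "SlopeMax45d" field applies, written in Python for illustration only (MiningMath evaluates the expression internally; this is just a way to reason about the logical test):

```python
# Logical test sketch: slope values greater than 45 degrees are replaced by 45,
# all other values are kept unchanged.
def slope_max_45(slope: float) -> float:
    return 45.0 if slope > 45.0 else slope   # equivalent to min(slope, 45)

print(slope_max_45(52.3))  # 45.0
print(slope_max_45(38.0))  # 38.0
```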

During the expression construction, green or red underlines will highlight the correct parts and the ones that need adjustments. When it is all set, just click on "Add field" and the new field will be available for use in the project under its field type. In case you need to delete a field, just go to the Parameters option, select the field and delete it.

Removing a field

To remove an existing field, go to the “Parameters” tab, select the desired field and click “Remove”.

Figure 4: Removing a field

NPV Calculation

The following video explains more about the NPV calculation made by MiningMath's algorithm. Understanding these steps might be useful for users working on projects with variable mining costs, which are not yet smoothly implemented in the UI.

Video 1: NPV calculation.

The discount rate (%/year) is provided by the user in MiningMath’s interface, as depicted in the figure below.

Figure 1: Interface example to define discount rate (%/year)

In a usual scenario, period ranges are defined by annual time frames, as depicted in Figure 2.

Figure 2: Interface example with annual time frame

In this case, the annual discount rate multiplier (annual_multiplier) to return the discounted cash flow is performed as follows:

\(\text{annual_multiplier}(t) = \frac{1}{(1 + \text{input discount rate})^t}\)

The table below exemplifies one case for 10 periods.

Period | Process 1 | Dump 1 | NPV (Discounted) M$ | Annual multiplier | Undiscounted NPV M$
1      | P1        | Waste  | 1.2                 | 0.909             | 1.320
2      | P1 +5%    | Waste  | 137.9               | 0.826             | 166.859
3      | P1 +5%    | Waste  | 132.5               | 0.751             | 176.358
4      | P1 +5%    | Waste  | 105.4               | 0.683             | 154.316
5      | P1 -5%    | Waste  | 89                  | 0.621             | 143.335
6      | P1 -5%    | Waste  | 92                  | 0.564             | 162.984
7      | P1 -5%    | Waste  | 91.3                | 0.513             | 177.918
8      | P1 -10%   | Waste  | 52.3                | 0.467             | 112.110
9      | P1 -10%   | Waste  | 54.3                | 0.424             | 128.037
10     | P1 -10%   | Waste  | 12.1                | 0.386             | 31.384

Table 1: Example of annual multiplying factors and undiscounted cash-flows for a 10% discount rate per year. Process 1 exemplifies the use of different economic values per period.

In detail, Table 1 lists:
  1. the NPV (discounted) resulting from 10 yearly periods with a 10% discount rate per year;

  2. the annual discount multiplier (annual_multiplier) for each period; and

  3. the undiscounted NPV, obtained by dividing the discounted NPV by the annual_multiplier, as sketched below.
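The short Python sketch below reproduces the annual multipliers and undiscounted values of Table 1 for a 10% yearly discount rate; the discounted NPVs are taken directly from the table.

```python
# Annual multiplier and undiscounted NPV for each period (10% per year).
rate = 0.10
discounted = [1.2, 137.9, 132.5, 105.4, 89, 92, 91.3, 52.3, 54.3, 12.1]  # M$

for t, npv in enumerate(discounted, start=1):
    multiplier = 1 / (1 + rate) ** t      # annual_multiplier(t)
    undiscounted = npv / multiplier       # undiscounted NPV
    print(f"period {t}: multiplier = {multiplier:.3f}, undiscounted = {undiscounted:.3f}")
```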

MiningMath allows the creation of scenarios in which period ranges are defined with custom time frames (months, trienniums, decades, etc.), as depicted in Figure 3.

Figure 3: Interface example with custom time frames

In this case, the discount rate is still provided in years on the interface. However, the discount rate per period follows a different set of calculations. To identify the correct multiplier (the discount factor for a custom time frame) applied to each period, it is necessary to apply the formula below:

\( \text{mult}(t) = \frac{1}{(1 + \text{discount_rate}(t)) ^ {\text{tf_sum}(t)}}\)

where:

\(
\text{tf_sum}(t) = \sum_{i=1}^{t}\frac{TF(i)}{TF(t)}
\)

and

\(
\text{discount_rate}(t) = (1 + \text{annual_discount_rate})^{TF(t)} - 1
\)

and

\(
TF(t)=
\begin{cases}
1,& \text{if}\, t\, \text{is in years}\\
\frac{1}{12},& \text{if}\, t\, \text{is in months}\\ 3,& \text{if}\, t\, \text{is in trienniums}\\
etc.&
\end{cases}
\)

For example, to calculate the multiplier of the first period in figure 3, the equation would be:

\( TF(1) = \frac{1}{12} = 0.0833… \)

\( \text{tf_sum}(1) = \sum_{i=1}^{1}\frac{TF(i)}{TF(1)} = 1 \)

\( \text{discount_rate}(1) = (1 + \text{annual_discount_rate})^{TF(1)} - 1 = (1 + 0.1)^{1/12} - 1 \approx 0.008 \)

\( \text{mult}(1) = \frac{1}{(1 + \text{discount_rate}(1))^{\text{tf_sum}(1)}} = \frac{1}{(1 + 0.008)^{1}} \approx 0.992 \)

Another example, to calculate the multiplier of period 15 in figure 3, the equation would be:

\( TF(15) = 3 \)

\( \text{tf_sum}(15) = \sum_{i=1}^{15}\frac{TF(i)}{TF(15)} = 2 \)

\( \text{discount_rate}(15) = (1 + \text{annual_discount_rate})^{TF(15)} - 1 = (1 + 0.1)^{3} - 1 = 0.331 \)

\( \text{mult}(15) = \frac{1}{(1 + \text{discount_rate}(15)) ^ {\text{tf_sum}(15)}} = \frac{1}{(1 + 0.331)^{2}} = 0.564 \)
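The sketch below implements the custom time-frame formulas above in Python. The list of time frames is an assumption chosen to reproduce the two worked examples (periods 1-12 in months, 13-14 in years, and trienniums from period 15 onwards); replace it with the period ranges of your own scenario.

```python
# Custom time-frame multipliers, assuming a 10% annual discount rate.
annual_rate = 0.10
TF = [1 / 12] * 12 + [1] * 2 + [3] * 5      # TF(t) in years, one entry per period

def mult(t):                                 # t is 1-based, as in the formulas
    tf_sum = sum(TF[:t]) / TF[t - 1]                      # tf_sum(t)
    period_rate = (1 + annual_rate) ** TF[t - 1] - 1      # discount_rate(t)
    return 1 / (1 + period_rate) ** tf_sum                # mult(t)

print(round(mult(1), 3))    # ~0.992, first monthly period
print(round(mult(15), 3))   # 0.564, matching the second example
```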

Evaluate Project Potential

Certain constraints related to your project can be defined so that you can understand its maximum potential. The surface generated in this case could also be used as a restricted mining surface in the last period to reduce the complexity of your block model and the runtime of MiningMath, since it already includes the set of constraints inputted.

Example

  • Set up a scenario with 1,000 Mt in the processing plants, which corresponds to a lot more mass than expected in the whole life of the mine.

  • Add the Minimum Bottom Width (100m). This constraint will allow you to have a suitable work-front for your equipment.

  • Restrict Mining surface, if you have this constraint in your project.

  • Grade constraint of up to 0.7%.

  • Timeframe: Years (1), since it would all be processed in 1 period.

Note: Sum constraints can restrict the total amount of handling material (ore + waste) of the mine. Therefore, do not use them in the validation.

Using results

Now that the Constraints Validation step is done, you are able to use this final surface as a guide for future optimizations. This approach reduces the runtime and the complexity of the algorithm because, when you take the surface into account, the blocks below it won't be considered and the heuristics inside the interface will be facilitated. Notice that we did not make any change to the discount rate; thus, this first NPV does not represent reality. If you need an accurate result at this step, make sure to adjust it.

It’s important to remember that when we restrict the mining into this surface, the number of periods generated in future runs could be reduced because the average parameters of each one will have to meet the constraints of the overall package. Therefore, to achieve the same parameters in a lower timeframe, some blocks may be discarded due to the mining sequence and the optimization of destinations inside the whole mass.

Having this idea in mind, you should already have enough information to decide and structure the next step of the optimization. Based on the amount mined in the previous step and on the processing capacity, define a good timeframe to identify the mining sequence. In this case, we had 231 Mt of total ore mass to split over almost 23 years, since the processing capacity is 10 Mt per year.

To improve efficiency in the optimization, before working on a yearly basis, we decided to consider the first 5 years. It is reasonable to generate a 10-year surface and consider the optimization inside this limit, due to the observations made before. Remember that each assumption here can be made according to your project's demands and that MiningMath can work with any timeframe to meet your needs.

Decision Trees

Comparing Scenarios

Decision Trees provide you with a detailed broad view of your project, allowing you to plan your mining sequence by analyzing every possibility in light of constraints applied to each scenario, which options are more viable and profitable to the global project, as well as how these factors impact the final NPV. Consider, for instance, the plant production per year as a variable factor. Using Decision Trees (Figure 1), you will be able to analyze how each constraint, e.g. the ore price, affects that year’s production and benefits or not the global project.

Figure 1: Essence of a Decision Tree, done in presentation software.

By running all the scenarios individually, just like what you did on Practice First, you will be able to identify how each change, within a set of constraints, impacts the NPV results and the mining sequence generated (Figure 2 and 3), which provides you a broader view of your project and enables you to decide which route you should take to generate value to your company.

How to Analyze Multiple Scenarios

Increase in the value of copper

Analyzing first the scenario in which there is a change in the economic value of the P1 process (“scn-PriceUp”), values such as NPV would naturally be different. In this case, analyzing the NPV and the total movement (Figure 3), it’s possible to understand that a different mining sequence was generated, which increased the mine’s lifetime by one period. This market change has also increased cumulative NPV (Figure 4) values based on its direct relation with the copper selling price. The charts below were made with the help of MiningMath’s results in simple spreadsheet software.

Figure 4: Total mass (Process+waste) handled on each scenario.
Figure 5: Cumulative NPV contrasts.

Adding an average grade limit

Now we can analyze the scenario in which a restriction in the average grade at P1 process was added, using a minimum and a maximum limit of copper (“scn41-AvgCu”). The blocks that would be processed would have to meet established targets, allowing a better selectivity of what should be processed or not. The ones which have higher or lower grades than required could be blended with others to generate an average grade that respects the constraints and improves the NPV.

Notice that there was a higher total production (Figure 5) in each period, caused by the increase of the stripping (waste/ore) ratio to meet the 30 Mt of ore production at the P1 process and the average grade targets set in the "scn41-AvgCu" scenario. A better use of stockpiling is expected, in order to use all the blending capabilities and decision-making intelligence of the algorithm to decide which blocks could be mixed to fulfill the plant capacity. In addition, the cumulative NPV (Figure 6) shows that by inserting average grade constraints we consequently reduce the algorithm's flexibility and lose some money to keep the operational stability frequently required at a processing plant.

In general, the main goal of MiningMath, considering the set of constraints provided, is to maximize the cumulative NPV in the shortest mine lifetime possible, which would reduce the project depreciation by interest rates. The charts below were made based on MiningMath results with the support of spreadsheet software.

Figure 6: Total mass handled on each scenario.
Figure 7: Cumulative NPV contrasts.

Building Decision Trees

You have been introduced to some of MiningMath’s functionalities. Now let’s take a closer look at how decision trees are built.

Mine project evaluation largely relies on technology from the 1960’s, in which a step-wise process is usually necessary along with time-consuming activities, like pit-design, in order to create only one single scenario. Evaluating projects through this approach could take from weeks to months of multidisciplinary work just to produce a couple of scenarios. This process is often guided by some arbitrary decisions that may constrain the mathematical solution space, confining solutions to engineering expertise and judgment.

Global optimization scheduling can speed up the process of generating multiple scenarios for a project overview prior to detailed work. MiningMath integrates the business areas and allows managers to improve their decision-making process by structuring their strategic analysis through multiple decision trees, with a broader and optimized view of their projects, comprising constraints from different areas of the company.

The following video shows a few possibilities recognized only when seeing the available paths to create value. The video is oriented to technical daily usage but also covers interesting subjects for the managerial perspective. For the last case, skip straight to minute 15:23.

Video 1: Video detailing the building of decision-trees.

Apply to your projects

Now that you have played with the sample data, it is time for a hands on approach and apply this optimized strategy to your own projects!

MiningMath already allows you to structure your Decision Trees layout at its home page, which facilitates and guides the decision-making and mining planning processes.

Take advantage of the possibility to add (+), rename, or delete Decision Trees (Figure 7) by right-clicking their names, and/or exchange scenarios (Figure 8) between trees to build different mining planning strategies. The icon is a shortcut, so you can easily open your scenario's full report.

Compare everything in a single look and identify how each change impacts your results to build your own analysis by using presentations based on MiningMath charts as shown in Figure 1.

Interface Overview

Home Page

MiningMath automatically opens on the home page, as shown below.

Two main areas are accessible from the home page:

  1. Recent projects: This section allows you to select a project. Right-clicking on the project name provides you with a range of options:

    New scenario: The "Scenario Config" window will open, allowing you to choose which decision tree to place your new scenario in. You can also enter a name and a description for it. After that, you'll be directed to the Scenario tab to set it up.

    Show in Explorer: This option opens the directory containing the folder with your project files and data.

    Remove from list: This option removes the selected project from the Recent Projects list.

    Delete project: This option permanently deletes the project and all associated scenarios.

  2. MiningMath menu: this provides quick access to essential functions.

    Here's a description of each item:

    New Project: Use this option to start a fresh project. Clicking on it will allow you to create and configure a new mining optimization scenario from scratch.

    Open Project: This allows you to open an existing project that you have previously worked on. You can browse your files and select the project you want to continue.

    License: Clicking on this option takes you to the licensing section, where you can manage your software license, check its status, or enter a new license key. More info about license can be seen here.

    Help: This displays a new window with key software information and links to other essential resources.

    Close: This option closes the current session or the entire application.

Once a project is selected, the Decision Trees and Model areas will be displayed (as depicted below).

Both sections provide key information about project results and block model parameters. They can be used as follows:

  1. Decision Trees: This feature allows you to quickly navigate through recent scenarios without needing to open them from their original folders.

    Additionally, it lets you create new tabs and organize your mining planning strategies by exchanging scenarios as needed. You can access all paths involved in the project, giving you a comprehensive view that enhances your decision-making process (read more). To open any scenario, right-click and select the "Open" option.

     


    Alternatively, you can select the scenario and click “Open” at the bottom right of the screen. The “View” button next to it will take you to the Viewer tab instead. 

    This area displays scenarios for each tree, providing key information such as name, description, NPV (M$), runtime, and a direct link to the sheet containing all the results of the scenarios (available after execution).

    Several options are available to manage Decision Trees:

    1) Add new trees by clicking on “+” 

    2) Rename a tree by double-clicking its name

    3) By right-clicking on the tree name, you can add a new scenario, rename the tree, or delete it.

    For more options, right-clicking on a scenario’s name reveals hidden choices: open, view model, rename, show in Explorer, delete, and transfer it between decision trees.

    Lastly, the scenario description can be easily edited with a double click.

  2. Model Table: This section provides key information about your block model and its parameters, allowing you to easily review it at any time using the "Edit" button. This will take you to the Calculator functionality.

Model Tab

This tab lets you modify your block model to meet your project requirements. You can also export the block model in CSV format for use with other software packages.

Parameters tab

The Model tab begins with the Parameters option, displaying your data from the previous setup during import, along with all other existing fields. You can also remove any parameter if needed.

Function tab

The Function tab features the Validate Block Parameters table, allowing you to select a single field in your model to verify its values. It also includes an internal Calculator for making adjustments and adding new fields to your block model.

Viewer Tab

MiningMath’s 3D Viewer enhances your workflow by providing a comprehensive visualization of your block model, optimization results, and surfaces from various angles. This tool allows you to filter and customize displayed features, offering a quick and efficient overview of your data and optimization processes.

Model properties and scenario results

After running your scenario, the results in MinedBlocks.csv are displayed in the 3D viewer, allowing you to see your model from different angles. By selecting "Period Mined," you can view the mining sequence period by period.

By selecting a surface, you can identify topography changes for each period and adjust its opacity, making visualization easier.

MiningMath also lets you import existing surfaces by placing them in the same folder as your other files, allowing you to validate their geometry if needed. Click on Load Scenario to import multiple scenarios and compare them, helping you extract the best results according to your project constraints.

Scenario Tab

After importing a model, you can manage the scenario setup in the Scenario Tab. The setup process is divided into several guided steps, helping you configure all necessary parameters. Before running the optimization, you’ll receive a summary of the entire setup for your review.

Feel free to explore each step using the links below or by navigating the page tree on the left side of the knowledge base.

Parameters

Constraints

Execution

Handling Data

Formatting the Block Model

Block Model Basic requirements

MiningMath requires the following formatting specifications:

  1. Regularized block model: This means all blocks must be the same size.

  2. Air blocks must be removed prior to importation. This is the way MiningMath recognizes the topography.

  3. Coordinates of each block in the 3 dimensions.

  4. Header names should not contain special characters or exceed 13 characters. Apply the same recommendation to folder and file names.

  5. The data format should be a CSV file (Comma Separated Values), which is compatible with most mining packages.
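As a quick programmatic check before importing, the sketch below (Python with pandas, assuming coordinate columns named "X", "Y", and "Z") verifies two of the requirements above: header length and characters, and regular block spacing. It is only a convenience; MiningMath performs its own validation during import.

```python
# Minimal pre-import checks for a block model CSV (column names are assumptions).
import pandas as pd

model = pd.read_csv("blockmodel.csv")

# Header names: ASCII only and at most 13 characters (adapt to your naming rules).
for name in model.columns:
    assert name.isascii() and len(name) <= 13, f"bad header: {name}"

# Regularized model: block spacing should be constant along each axis.
for axis in ("X", "Y", "Z"):
    steps = model[axis].drop_duplicates().sort_values().diff().dropna().unique()
    assert len(steps) <= 1, f"irregular block spacing along {axis}: {steps}"
```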

Good practices

  1. Configure Microsoft Windows number formatting to use dot as the decimal separator.

  2. Use the metric system.

  3. Set multiple fields that will consider different economic values, material types, contaminant limits, and any other variable you wish to analyze or control.

Must check

Understanding Field Types

Field Types are the fields MiningMath can understand. Each column imported should be assigned to the proper field type so that the software treats each variable according to its meaning.

Figure 1: Field types

Mandatory Field Types and their meanings

  1. Coordinates X, Y, and Z refer to your geo-referenced information.

  2. Average refers to any variable that could be controlled by means of minimums and maximums considering its average: grades, haulage distance, and other variables.

  3. Economic Value refers to the columns with the economic value, which represent the available destinations. It is possible to import multiple economic values at once, and they may be used simultaneously (ex.: multiple processing streams) or calculated in the internal calculator mentioned on the next page.

Optional Field Types and their meanings

  1. Density refers to the block's density. This field is used to calculate the block's tonnage.

  2. Slope refers to slopes varying block-by-block, which gives the flexibility to define slopes by lithotype and sectors.

  3. Recovery refers to recoveries varying block-by-block.

  4. Sum refers to any variable that could be controlled by means of minimums and maximums considering its sum.

  5. Predefined destinations refers to possible fixed destination values. This can be used, for example, if you want to define pushbacks or apply lithologic restrictions that prevent certain blocks from being processed. However, by fixing destinations you are preventing MiningMath from reaching its full potential. More about this here.

  6. Other refers to information that you wish to have in the exported outputs.

  7. Skip refers to any variable that should be ignored. This field type might help improve the runtime, since these variables will not be considered nor exported along with the optimization outputs.

Field names shortcut

Shortcuts can be used for automatic recognition in the importation process. These are listed in the table below.

Field name     | Shortcuts
Coordinates    | X | Y | Z
Average        | @ | grade
Density        | % | dens | sg
Economic value | $ | dest | val
Recovery       | * | recov
Slope          | / | slope
Sum            | +
Skip           | !
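As an illustration of how these shortcuts can map a header to a field type, the sketch below mimics the automatic recognition with a simple substring match; it is not MiningMath's actual implementation.

```python
# Guess a field type from a column header using the shortcut table above.
SHORTCUTS = {
    "Coordinate": ("x", "y", "z"),
    "Average": ("@", "grade"),
    "Density": ("%", "dens", "sg"),
    "Economic value": ("$", "dest", "val"),
    "Recovery": ("*", "recov"),
    "Slope": ("/", "slope"),
    "Sum": ("+",),
    "Skip": ("!",),
}

def guess_field_type(header: str) -> str:
    h = header.strip().lower()
    for field_type, keys in SHORTCUTS.items():
        if any(key in h for key in keys):
            return field_type
    return "Sum"   # unrecognized columns are assigned to Sum by default

print(guess_field_type("CU @grade"))    # Average
print(guess_field_type("dens (t/m3)"))  # Density
```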

Mandatory requirements

Considering the specifications mentioned before, the formatted data set should have the following information for each block:

  1. Coordinates.

  2. Grades (at least one element assigned as Average).

  3. Economic values (at least 1 process and 1 waste).

The following video gives an introduction on how to setup your block model.

Video 1: Block Model setup.

Attention to software conversions

The model’s origin must be placed at the bottom portion, starting to count from the minimum coordinates at X, Y, and Z.

Figure 1 highlights a block model origin at the corner of the first block and the coordinates on its centroid.

Each software uses its own conventions for data format, naming and numbering systems, etc. These differences should be observed to prevent conflicts when moving data between different software packages.

What you must know:

  1. MiningMath uses coordinates (X, Y, Z) in which Z, representing the elevation, increases upwards (Figure 3a).

  2. Other mining software may use indexes with IZ starting downwards (Figure 3b). MineSight is an example that uses this notation.

Figure 2: Blocks Matrix.

There is no right or wrong convention, but there is a correct procedure for each software.

To invert the coordinates, use the following conversion formula (a small sketch is shown after Figure 3b):
\(new(Z) = max(Z) + 1 - current(Z)\)

Figure 3a: The lowest IZ value is at the bottom of the model.
Figure 3b: The lowest Z value is at the top of the model, which will not fit MiningMath requirements.
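A minimal sketch of this inversion, applied to a list of example IZ indexes:

```python
# Invert Z indexes counted downwards using new(Z) = max(Z) + 1 - current(Z).
iz = [1, 2, 3, 4, 5]                  # example indexes, lowest value at the top
max_z = max(iz)

new_iz = [max_z + 1 - z for z in iz]
print(new_iz)                         # [5, 4, 3, 2, 1] -> lowest value at the bottom
```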

Air Blocks

MiningMath assumes that all imported blocks of your model lie below the topography. This means it is necessary to remove all air blocks prior to importation, as this is how MiningMath recognizes the topography. Unless your topography is totally flat, which is unlikely, your model should look like the image below.

Not removing air blocks may lead to unsatisfactory results and long processing times, since the optimization would consider blocks that do not exist in reality.

Figure 4: Example of how block models should look like with a rectangular base.

More Details on Air Blocks

The following video shows how to remove air blocks using filters in MS Excel. These tips are also applicable to any mining software of your choice.

Video 1: Removing air blocks using filters on MS Excel.
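The same clean-up can also be scripted instead of filtered in a spreadsheet. The sketch below uses Python with pandas and assumes air blocks are flagged by a zero or missing density; substitute whatever attribute identifies air in your own model.

```python
# Remove air blocks from a raw block model CSV before importing it.
import pandas as pd

model = pd.read_csv("blockmodel_raw.csv")

solid = model[model["Density"].fillna(0) > 0]       # keep only non-air blocks
solid.to_csv("blockmodel_no_air.csv", index=False)

print(f"removed {len(model) - len(solid)} air blocks out of {len(model)}")
```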

Importing the Block Model

Block Model File

To import the block model, select the option New Project on the left panel of MiningMath (Figure 1). 

Figure 1: Creating a new project to import a new model.

Afterwards, the file name input field is shown in red, indicating a mandatory field (Figure 2). Browse for and select the CSV-formatted file. Press Next to advance.

Figure 2: Importing a CSV model.

Project Naming

In the next window, shown in Figure 3, the Model Name must be entered.

Optionally, the destination folder (Model Folder) can be changed as well as the Scenario Name, and a Scenario Description can be added.

Figure 3: Defining a name for the model and the first scenario.

Imported Fields & Validation

Upon clicking Next, the following window will provide a statistical summary of information for the block model that will be imported (Figure 4).

Check the parameters carefully.

Figure 4: Validating your data.

Geo-reference system, Origin, Dimension and Rotation

Upon clicking Next, the CSV file will be imported into MiningMath, and data related to the block model geo-reference system will be shown, which can only be coordinates. The next steps are to enter the rotation in degrees (Azimuth rotation), the origin according to your mining package, and the block dimensions, as illustrated in Figure 5. The number of blocks is automatically calculated after the origin and dimensions are provided.

The origin of this project was x=3,475, y=6,480, and z=285, and the block dimensions were 30 meters in each coordinate.

Figure 4: Coordinates input.

Rotated models

MiningMath supports the use of block models that have been rotated using an Azimuth rotation (Figure 5).

Figure 5: Example of Azimuth rotation in the coordinate system.

The amount of rotation degrees can be passed as depicted in Figure 6.

Figure 6: Azimuth rotation depicted when hovering over the RZ field.

After importing, you can see the rotated model in the Viewer tab (Figure 7). The detailed steps with mathematical formulations for the rotation procedure can be seen here.

Figure 7: Example of rotated model in the viewer tab.

Field Type Assignment

When Next is selected, the following form will appear (Figure 8), showing correlations between the imported CSV file header and the available field types in MiningMath.

You must associate each imported column to one of the options located just above the table, for instance: block coordinates X, Y, and Z to Coord. X, Y, and Z field types. For more details on how you can correlate each column, access this link. You can also keep the original data from your previous Mining Package, by using this approach.

If you do not already have an Economic Value function, when importing your block model, you will be directed to the Scenario tab. Then, click on the Function tab to calculate your Economic Value function in the internal calculator as explained here.

Figure 8: Assigning each column to the proper field type.
Notes
  1. MiningMath has mandatory variables (columns) to be assigned to the proper Field Type:

    1) Coordinates (X, Y, Z).

    2) Average

    3) Economic Values (at least two)

  2. The data validation screen might be overlooked, but it is very important to validate one's data based on minimums and maximums. Read more.

  3. Each column imported should be assigned to the proper field type in order for MiningMath to treat each variable accordingly. Read more.

  4. Typically, MiningMath recognizes some columns automatically when their headers are similar to the Field Type name. Otherwise, MiningMath will automatically assign them to the field type Sum.

    To enable the Next button, the user needs to assign each one of the mandatory variables to its respective Field Type.

Grade, Dimension and Origin

After clicking Next, you will be asked for the grade units. As you can see in Figure 10, the copper grade has been defined as a percentage (%), while the gold grade was defined as PPM, which stands for parts per million and, in turn, is equivalent to g/ton.

Figure 10: Informing block dimensions, origin, and grade units.

View Your Model and Surfaces

After filling in the required fields, the View Model and Scenarios options will be enabled. Before setting up your first scenario, you can view the model by clicking on Viewer and then Load Scenario. Select all the options and click Load. This option also allows you to view surfaces you have created: just place them in the scenario folder before loading and perform a first validation.

Evaluate your model

After importing your model, you can view it in the Viewer tab as depicted in Fig. 11-14. This should help you answer questions such as:

  1. Where are the high grades distributed?

  2. Do the positive (above zero) process economic values match the regions identified in the previous question?

  3. How are waste economic values distributed? Are maximum and minimum values reasonable when you compare them with the process?

Economic Values

MiningMath does not require pre-defined destinations ruled by an arbitrary cut-off grade. Instead, the software uses an Economic Value for each possible destination and for each block. The average grade that delineates whether blocks are classified as ore or waste will be a dynamic consequence of the optimization process.

Destinations required

MiningMath requires at least two destinations.

Therefore, each block must be associated with:
  • 1 Processing stream and its respective economic value

  • 1 Waste dump and its respective economic value

Notes:
  • Even blocks of waste might have processing costs in the economic values of the plant. Therefore, non-profitable blocks would have higher costs when sent to process instead of waste.

  • If you have any material that should be forbidden in the plant, you can use economic values to reduce the complexity and runtime, as mentioned here.

Simplified flow-chart of blocks’ destinations optimization. 

Calculation

Each field related to Economic Value (Process/Waste) must report the value of each block as a function of its destination (Process or Waste in this example), grades, recovery, mining cost, haul costs, treatment costs, blasting costs, selling price, etc. The user is not required to pre-set the destination, as the software will determine the best option during the optimization.

To calculate the Economic Values you can use MiningMath's internal calculator, available at the “Function” option inside the “Model” tab. To illustrate the calculation of economic values, an example is shown below. The calculation parameters are listed in Table 1.

Description                                  Cu (%)      Au (ppm)
Recovery                                     0.88        0.6
Selling price (Cu: $/t, Au: $/gram)          2000        12
Selling cost (Cu: $/t, Au: $/gram)           720         0.2
Processing cost ($/t)                        4
Mining cost ($/t)                            0.9
Discount rate (%)                            10
Dimensions of the blocks in X, Y, Z (m)      30, 30, 30

Table 1: Parameters for calculating the economic values.

Figure 1: Internal Calculator.

Block Tonnes

  • Block Tonnes = BlockVolume * BlockDensity

  • Block Tonnes = 30*30*30*[Density]

Figure 2: Block model calculations.

Tonnes Cu

  • Tonnes Cu = Block Tonnes x (Grade Cu/100)

  • Tonnes Cu = [BlockTonnes]*([CU]/100)

Figure 3: Block model calculations.

Mass Au

  • Mass Au = Block Tonnes x Grade Au

  • Mass Au = [BlockTonnes]*[AU]

Figure 4: Block model calculations.

Economic Value Process

  • Economic Value Process =
    [Tonnes Cu x Recovery Cu x (Selling Price Cu – Selling Cost Cu)] +
    [Mass Au x Recovery Au x (Selling Price Au – Selling Cost Au)] –
    [Block Tonnes x (Processing Cost + Mining Cost)]

  • Economic Value Process = ([TonnesCu] * 0.88 * (2000 - 720)) + ([MassAu] * 0.60 * (12 - 0.2)) - ([BlockTonnes] * (4.00 + 0.90))

Figure 5: Process Economic Value calculation.

Economic Value Waste

  • Economic Value Waste = –Block Tonnes x Mining Cost

  • Economic Value Waste = –[BlockTonnes] * 0.9

Figure 6: Economic Value Waste calculation.

The example block in Figures 4-6 would generate -$299,880 if sent to the process and -$55,080.10 if discarded as waste. Therefore, this block should go to waste, since that results in a smaller loss than processing it. MiningMath defines the best destination considering the whole set of constraints over time, so in most cases this decision is far more complex than in the example above.
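For reference, the same arithmetic can be reproduced in a few lines of Python. This is only a sketch of the formulas above using the Table 1 parameters; the block grades and density chosen here are hypothetical values for illustration and do not correspond to the block shown in Figures 4-6.

# Parameters from Table 1
block_volume = 30 * 30 * 30          # m3
density = 2.75                        # t/m3 (Marvin default, assumed)
grade_cu = 0.5                        # % (hypothetical)
grade_au = 0.3                        # ppm = g/t (hypothetical)

block_tonnes = block_volume * density
tonnes_cu = block_tonnes * (grade_cu / 100)
mass_au = block_tonnes * grade_au     # grams of Au

value_process = (tonnes_cu * 0.88 * (2000 - 720)
                 + mass_au * 0.60 * (12 - 0.2)
                 - block_tonnes * (4.0 + 0.9))
value_waste = -block_tonnes * 0.9

print(f"Economic Value Process: {value_process:,.2f} $")
print(f"Economic Value Waste:   {value_waste:,.2f} $")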

Data Validation

Running an optimization for complex projects with several constraints may demand hours just to check whether the formatting was done properly. Therefore, we present here an efficient scenario setup to quickly validate your data.

This page uses the Marvin Deposit as an example. To see its parameters and constraints please check the page here.

Validate it First

In order to validate your data and cut its runtime, we strongly recommend running MiningMath FULL with the following setup:

  1. Process and dumps set with respective recovery values.

  2. A bigger production capacity than the expected reserves. In this example, the expected life of mine is about 35 years at a production rate of 10 Mt per year. Hence, a value of 1,000 Mt would be big enough to cover the whole reserve.

  3. No discount rate.

  4. No stockpiling.

  5. Density and slope values.

  6. Timeframe: Years (1), since it would all be processed in 1 period.

The figure below depicts this setup in MiningMath, with the relevant fields highlighted.

Results

Results are depicted below: the sequenced blocks, the resulting surface, the surface with blocks, and the production tonnage.

Ultimate pit

The surface returned by this data validation process represents the most economically viable pit shell, also known as the ultimate pit.

Questions

  • Did the scenario run properly?

  • Are most of the positive economic values from the process inside this surface?

  • Is the mining happening in reasonable areas?

  • Does the life of mine have a reasonable number of periods?
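One way to check the second question quantitatively is to export AllBlocks.csv for this validation run and compute how many blocks with a positive process economic value ended up inside the pit. The column names below (Mined Block, EcoValProcess) are assumptions for illustration; adjust them to match the header of your exported file.

import pandas as pd

blocks = pd.read_csv("AllBlocks.csv")

# Blocks whose process economic value is positive
positive = blocks[blocks["EcoValProcess"] > 0]

# Fraction of those blocks that were mined (i.e., lie inside the surface)
inside = (positive["Mined Block"] == 1).mean()
print(f"{inside:.1%} of the blocks with positive process value lie inside the pit")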

Constraints Validation

Continuing the data validation, start adding the first constraints related to your project so that you can understand its maximum potential. The surface generated in this case could also be used as a Restrict Mining surface in the last period to reduce the complexity of your block model and the runtime of MiningMath, since it already reflects the set of constraints applied.

Example

  • Set up a scenario with 1,000 Mt in the processing plants, which corresponds to a lot more mass than expected in the whole life of the mine.

  • Add the Minimum Bottom Width (100m). This constraint will allow you to have a suitable work-front for your equipment.

  • Restrict Mining surface, if you have this constraint in your project.

  • Grade constraint of up to 0.7%.

  • Timeframe: Years (1), since it would all be processed in 1 period.

Note: Sum constraints can restrict the total amount of material handled (ore + waste) in the mine. Therefore, do not use them in the validation.

Let's make everything clear

Now that the Constraints Validation step is done, you are able to use this final surface as a guide for future optimizations. This approach reduces runtime and algorithm complexity because blocks below this final optimized surface are no longer considered, which simplifies the heuristics inside the interface. Notice that we did not change the discount rate, so this first NPV does not represent reality. If you need an accurate result at this step, make sure to adjust it.

It is important to remember that when we restrict mining to this surface, the number of periods generated in future runs could be reduced, because the average parameters of each period will have to meet the constraints of the overall package. Therefore, to achieve the same parameters within a shorter timeframe, some blocks may be discarded as a consequence of the mining sequence and the destination optimization within the whole mass.

With this idea in mind, you should already have enough information to decide on and structure the next step of the optimization. Based on the amount mined in the previous step and on the processing capacity, define a suitable timeframe to identify the mining sequence. In this case, we had 231 Mt of ore to split over roughly 23 years, since the processing capacity is 10 Mt per year.

To improve optimization efficiency, before working on a yearly basis we decided to consider the first 5 years. Based on the observations above, it is also reasonable to generate a 10-year surface and restrict the optimization to this limit. Remember that each assumption here can be adjusted according to your project's demands, and that MiningMath can work with any timeframe to meet your needs.

Exporting Data

Exporting the Model

Select the button Export Model on MiningMath’s Model tab, as shown below.

Figure 1: Clicking on Export.

After clicking on Export, a new page will appear, allowing you to select the folder where the exported block model will be saved and the name of the file.

Figure 2: Exporting data.

Just click on “Next” and your model will be exported to the selected folder.

Public Datasets

MiningMath allows you to learn, practice, and demonstrate the concepts of Strategy Optimization using its full capabilities with the Marvin deposit, and to show any scenario previously run. This version is freely available to mining professionals, researchers, and students who want to develop their skills with this standard block model.

Marvin Deposit

DB Information

Below are listed the default parameters for Marvin according to the adaptations made in our formatted model.

Parameter                  Value
Block size                 27,000 m³ (X = 30 m, Y = 30 m, Z = 30 m)
AU - Selling Price         12 $/g
AU - Selling Cost          0.2 $/g
AU - Recovery              0.60
CU - Selling Price         2000 $/ton
CU - Selling Cost          720 $/ton
CU - Recovery              0.88
Mining Cost                0.9 $/ton
Processing Cost            4.0 $/ton
Discount Rate              10% per year
Default Density            2.75 t/m³
Default Slope Angles       45 degrees

Some common constraints applied to the Marvin deposit are listed below.

Constraint                       Value
Processing capacity              10 Mt per year
Total movement                   40 Mt per year
Sum of processing hours          4,000 per year (detailed estimate of the plant throughput)
Vertical rate of advance         150 m per year
Copper grade                     Limited until 0.7%
Minimum Mining Width             50 m
Minimum Bottom Width             100 m
Restrict Mining Surface          Some surface in .csv format (for example, due to a processing plant in the area)
Fixed Mining (Stockpiling)       0.9 $/t
Rehandling cost (Stockpiling)    0.2 $/t

Economic Values

  • Process Function = BlockSize * Density * [GradeCU/100 * RecoveryCu * (SellingPriceCU - SellingCostCU) + GradeAU * RecoveryAu * (SellingPriceAU - SellingCostAU) - (ProcessingCost + MiningCost)]
  • Waste Function = -BlockSize * Density * MiningCost
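If you prefer to calculate the economic values outside the internal calculator, the same functions can be applied to the whole model with pandas before importing it into MiningMath. This is only a sketch: the file name and the column names CU (%), AU (ppm) and Density are assumptions that must match your own block model.

import pandas as pd

BLOCK_SIZE = 30 * 30 * 30      # m3, Marvin block volume
params = dict(rec_cu=0.88, rec_au=0.60,
              price_cu=2000, cost_cu=720,     # $/t of Cu metal
              price_au=12, cost_au=0.2,       # $/g of Au
              mining=0.9, processing=4.0)     # $/t of rock

model = pd.read_csv("marvin_block_model.csv")
tonnes = BLOCK_SIZE * model["Density"]

# Process and waste economic values, following the functions above
model["EcoValProcess"] = tonnes * (
    model["CU"] / 100 * params["rec_cu"] * (params["price_cu"] - params["cost_cu"])
    + model["AU"] * params["rec_au"] * (params["price_au"] - params["cost_au"])
    - (params["processing"] + params["mining"])
)
model["EcoValWaste"] = -tonnes * params["mining"]

model.to_csv("marvin_with_economic_values.csv", index=False)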

McLaughlin Deposit

DB Information

Below are listed the default parameters for the McLaughlin deposit according to the adaptations made in our formatted model.

Parameter               Value
Block size              X = 7.62 m (25 ft), Y = 7.62 m (25 ft), Z = 6.096 m (20 ft)
AU - Selling Price      900 $/oz
AU - Recovery           0.90
Mining Cost             1.32 $/ton
Processing Cost         19 $/ton
Discount Rate           15% per year
Default Density         3.0 t/m³
Default Slope Angle     45 degrees

Economic Values

  • Process Function = BlockSize * Density * [GradeAU * RecoveryAu * SellingPriceAU - (ProcessingCost + MiningCost)]
  • Waste Function = -BlockSize * Density * MiningCost

Output files

The Execution Options or Run Options allow the user to define:

  • Files to be exported.

  • The visual results to be automatically shown on the viewer after each run.

Figure 1 highlights in (A) where the user can trigger this pop-up window and in (B) the options available, among which the user can:

  • Export/not export to CSV files:

    • The resulting surfaces
    • The resulting model in two ways: all blocks or only mined blocks, with/without coordinates and/or index information.
  • Set which results to be shown on the viewer:

    • Surfaces
    • Model
Figure 1: Execution options.

MiningMath automatically produces:

  • Formatted reports (XLSX files).

  • Tables (CSV) whose data feeds the reports.

  • Updated block model (MinedBlocks or AllBlocks).

  • Surfaces as a grid of points (CSV)

MiningMath organizes files, as listed below:

SSMOD and SSPRJ are important to report any issues you face.

    • Model Folder
    • MiningMath Model file (.SSMOD).
    • MiningMath Project file (.SSPRJ).
    • Scenario folder
      • Output Block Model
        • MinedBlocks.CSV contains information about the mined blocks.
        • AllBlocks.CSV, when requested, contains information about all blocks.
      • Scenario file (.SSSCN) is an XML file read by the interface. Use it for a quick check on the parameters used.
      • Report file (.XLSX) summarizes some quantifiable results, including charts such as productions, average grades, and NPV.
      • MiningMath also generates independent report files (.CSV) present in the report file (XLSX) as a backup:
        • Production Process.
        • Production Dump.
        • Production Total.
        • Grade Process.
        • Grade Dump.
        • Metal Process.
        • NPV.
        • Cumulative NPV.
      • Surface files (Surface-##.CSV) formatted as a grid of points.


After each optimization, MiningMath exports the block model in one of two formats:

  • MinedBlocks.csv: The file presents only the blocks that have been mined from each scenario. Mined Blocks are exported by default, as it is a lighter file.

  • AllBlocks.csv: The All Blocks file presents all the blocks, whether mined or not, from each scenario, so it is basically the original Block Model along with resultant information from the optimization.

The resultant model includes all columns imported (except the skipped ones) in addition to the following information:

  • Mined Block shows whether (1) or not (0) a block has been mined.

  • Period Mined shows in which period a block has been mined (-99 for blocks that have not been mined). To learn more about the mining sequence within a period, access here.

  • Period Processed shows in which period a block has been processed (-99 for blocks that have not been processed).

  • Destination informs the destination of each block, according to the order in which the user has added processing stream(s) and waste dump(s).

Figure 2 shows where the user can switch between these options.

  1. Click on the highlighted Execution button (A) to open the Run Options (B).

  2. Select All blocks in model or Only mined blocks, as you need.

  3. Hit OK, then Run.

By default, MiningMath exports the MinedBlocks file as the block model output.

MiningMath will generate a report directly in Microsoft Excel, as shown in the following image, and display the optimized pit (blocks and surface) in the viewer if the user chooses this option (right figure above). The automatic preview shows only the mined blocks, colored according to each mining period defined by the scheduler.

The results presented in the Excel spreadsheet show, in the Charts tab, the graphs relative to the results calculated in the Report tab. The processed mass, discarded mass, stock development, Au/Cu grades in the process, Au/Cu grades in the dump, metal contained in the process, net present value, and cumulative net present value are arranged individually in the Production Process 1, Production Dump 1, Stock Process 1, AU/CU - Grade Process 1, AU/CU - Grade Dump 1, AU/CU - Metal Process 1, NPV, and Cumulative NPV tabs, respectively.

Figure 2: Results report.

By default, MiningMath exports only the Mined Blocks file, showing them by period on the viewer, as in the following illustration. The user can change any exporting options in the Run Options menu.

Figure 3: Visual results.

If the user chooses to export the model, MiningMath will automatically save the list of the scheduled blocks (MinedBlocks.csv) or all blocks (AllBlocks.csv) in the block model folder, as shown in the figure below, which can be imported into other mining software packages.

The files MinedBlocks.csv and AllBlocks.csv may contain indices and/or block coordinates, and all the imported data/parameters along with the following information:

Figure 4: Mined blocks.
  • Mined Block shows whether (1) or not (0) a block has been mined.

  • Period Mined shows in which period a block has been mined (-99 for blocks that have not been mined).

  • Period Processed shows in which period a block has been processed (-99 for blocks that have not been processed).

  • Destination informs the destination of each block, according to the order in which the user has added processing stream(s) and waste dump(s).
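A short pandas sketch can turn MinedBlocks.csv into a per-period summary, which is useful for cross-checking the Excel report. The column names Period Mined, Destination and Density follow the description above but may differ slightly in your export; treat them as assumptions, and adjust the block volume to your model.

import pandas as pd

mined = pd.read_csv("MinedBlocks.csv")

# Marvin block volume (30 x 30 x 30 m); adjust for your own model
block_volume = 30 * 30 * 30
mined["Tonnes"] = block_volume * mined["Density"]

# Total tonnage mined per period, split by destination
summary = (mined.groupby(["Period Mined", "Destination"])["Tonnes"]
                .sum()
                .unstack(fill_value=0))
print(summary)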

Video 1: Outputs and files’ hierarchy.

Workflow

Super Best Case

In the search for the upside potential for the NPV of a given project, this setup explores the whole solution space without any other constraints but processing capacities, in a global multi-period optimization fully focused on maximizing the project’s discounted cashflow.

As MiningMath optimizes all periods simultaneously, without the need for revenue factors, it has the potential to find higher NPVs than traditional procedures based on LG/Pseudoflow nested pits, which do not account for processing capacities (gap problems), destination optimization and discount rate. Traditionally, these, and many other, real-life aspects are only accounted for later, through a stepwise process, limiting the potentials of the project.

MiningMath vs Traditional Technologies

MiningMath’s Super Best Case serves as a reference to challenge the best case obtained by other means, including more recent academic/commercial DBS technologies available. See a detailed comparison of these two approaches below.

In modern/traditional technology, large size differences between consecutive periods may render them impractical, leading to the “gap” problem. Such a gap is caused by a scaling revenue factor that might prevent a large area from being mined until some threshold value is tested. MiningMath allows you to control the entire production without oscillations, thanks to its global optimization.

In the modern/traditional methodology, decisions on block destinations can be made with techniques such as fixed predefined values based on grades/lithologies, post-processing cut-off optimization based on economics, post-processing based on math programming, or even multiple rounds combining these techniques. With MiningMath, the destination optimization happens within a global optimization in a single step, maximizing NPV and accounting simultaneously for capacities, sinking rates, widths, discounting, blending, and many other required constraints.

Modern technology is restricted to pre-defined, less diverse sequences because it is based on step-wise process built upon revenue factor variation, nested pits, and pushbacks. These steps limit the solution space for the whole process. MiningMath performs a global optimization, without previous steps limiting the solution space at each change. Hence, a completely different scenario can appear, increasing the variety of solutions.

Due to tonnage restrictions, modern technology might need to mine partial benches in certain periods. With MiningMath’s technology, there isn’t such a division. MiningMath navigates through the solution space by using surfaces that will never result in split benches, leading to a more precise optimization.


Modern approaches present a difference between the optimization input parameters for OSA (Overall Slope Angle) and what is measured from output pit shells, due to the use of the “block precedence” methodology. MiningMath works with “surface-constrained production scheduling” instead. It defines surfaces that describe the group of blocks that should be mined, or not, considering productions required, and points that could be placed anywhere along the Z-axis. This flexibility allows the elevation to be above, below, or matching a block’s centroid, which ensures that MiningMath’s algorithm can control the OSA precisely, with no errors that could have a strong impact on transition zones.

Example

Setting up the Super Best Case is simple. There are only two necessary restrictions:

  1. Processing capacity: 10 Mt per year.

  2. Timeframe: Years (1).

Depending on your block model, additional parameters may need to be specified. For example, if you have multiple destinations these could be added for proper destination optimization. The figure below provides a comprehensive overview, highlighting the essential parameters required for running the Super Best Case scenario using the pre-installed Marvin dataset.

Results

Results can be analysed in the Viewer tab and in the exported report file. For the pre-installed Marvin dataset, note how the sequencing has no gap problems, and the production is kept close to the limit without violating any restrictions.

Super Best Case Sequencing
Sequencing Slice
Super Best Case production tonnages

Export files

The block periods and destinations optimized by MiningMath’s Super Best Case (or any other scenario) can be exported in a CSV format. You could use these results to import back into your preferred mining package, for comparison, pushback design or scheduling purposes. Export options are depicted below.

Adding constraints

A refinement of the super best case can be done by adding more constraints, preferably one at a time, to evaluate the impact of each on “reserves”, potential conflicts between them, and so on. You can follow the suggestions below for this refinement:

Optimized Pushbacks

MiningMath offers the option of producing optimized, single-step pushbacks with controlled ore production and operational designs. This procedure is important to ensure the financial and operational viability of the mining project, as excessively large volumes can render the project unfeasible, while excessively small volumes can result in resource wastage or missed opportunities for ore extraction.

By testing different volumes, it is possible to find an optimal point that maximizes the net present value (NPV) of the project.

MiningMath's single-step methodology to generate a diverse range of pushbacks straight from the block model.

How does it work?

MiningMath utilizes timeframes to generate pushbacks at different levels of detail. Timeframes are time intervals that divide the mine’s lifespan into smaller periods. Different timeframes allow users to perform a fast evaluation of the impact of production volume on the NPV. If necessary, adjustments can be made to optimize production and reduce costs.

In Pushback Optimization, multiple optimized pushback scenarios are created with varying levels of detail, enabling users to have a comprehensive view of the impact of volume variations on the project’s performance.

Single-step approach

Every pushback produced with MiningMath is created in a single step, straight from the block model, taking into consideration geometric constraints such as minimum bottom width and minimum mining width, while controlling tonnages, blending, and other requirements.

Multiple single-step pushback scenarios can be created with varying levels of detail, giving users a higher variety of options and a comprehensive view of the impact of volume variations on the project’s performance.

Create a Pushback

You can identify timeframe intervals in your project so that you can work with grouped periods before getting into a detailed view. This strategy allows you to run scenarios faster without losing flexibility or adding dilution to the optimization, which happens when we reblock.

The idea is to make each optimized period represent biennial, triennial, or decennial plans. MiningMath allows you to do this easily by simply adjusting some constraints to fit the selected timeframe. Notice that in this example the processing target was not fully achieved; this kind of approach helps us understand which constraints interfere the most with the results.

Example

Property                       Value
Timeframe custom factor        5
Processing capacity            50 Mt in 5 years
Dump capacity                  150 Mt in 5 years
Vertical rate of advance       750 m in 5 years
Minimum Mining Width           100 m
Minimum Bottom Width           100 m
Restrict Mining Surface        Optional
Grade copper                   0.88%
Stockpiling parameters         On

Note: Waste control and vertical rate of advance are not recommended if you are just looking for pushback shapes.

Work Through Different Timeframes

Given the previous initial scenario, you might want to consider different timeframes for your pushback design. In order to perform a Pushback Optimization, the timeframes (in green), process and dump production limits (in green) and the vertical rate (in red) will be adjusted.

By varying the highlighted parameters above, the following decision tree has been constructed for Pushback Optimization.

Three different timeframes are explored: 3 years, 5 years, and 10 years. Each timeframe is associated with specific process and dump production limits. Such limits not only scale with their respective timeframes but also allow for variations that provide flexibility for testing different production scenarios. Finally, the vertical rate is also adjusted to align with the defined timeframe of each scenario. For instance, the vertical rate is set to 450m for the 3-year timeframe, 750m for the 5-year timeframe, and 1500m for the 10-year timeframe.

Afterward, specific results were carefully selected for comparison, focusing on key parameters such as Net Present Value (NPV), production process, and production dump.

More details

The two constraints entered in the Production tab relate to the maximum material handling allowed; the third concerns the processing equipment capacity, and the vertical rate of advance relates to the depth that could be achieved, adjusted to this interval. The minimum mining width was added because we are already generating designed surfaces that could be used later as guidance for detailed schedules, so it should respect this parameter due to the equipment sizing. Parameters such as average grade, minimum bottom width, and restrict mining surface do not change across the timeframes.

It’s important to remember that the packages of time here don’t necessarily have to correspond to identical sets of years. You could propose intervals with different constraints until reaching reasonable/achievable shapes for the design of ramps, for example. If you wish to produce more operational results, easier to design and closer to real-life operations, try playing with wider mining/bottom widths. These changes will not necessarily reduce the NPV of your project.

With this approach, the discount rate serves only as a rough NPV approximation and does not much affect the quality of the solution, given that the best materials, subject to the required constraints, will be allocated to the first packages anyway.

Remember all the constraints

NPV Upside Potential

NPV Upside Potential is the process of generating and analyzing scenarios to measure the impact of each constraint on the project’s net present value (NPV), from the Super Best Case to a detailed setup. Measuring the impact of each constraint on the NPV is important to assess the financial impact and ensure the project’s viability under different scenarios and conditions. Each constraint can have a significant impact on the project’s NPV, and it is crucial to understand how they affect the project’s financial performance.

By evaluating the impact of each constraint on the project’s NPV, it is possible to identify financial bottlenecks and opportunities for improvement, as well as prioritize problem resolution. This can result in better resource allocation and cost reduction, enhancing the project’s profitability and viability.

In NPV Upside Potential, scenarios are created that sequentially incorporate each constraint of the project, allowing users to have a comprehensive view of the impact of each constraint on the project’s performance. In case more efficiency is needed, the resulting surface obtained on the Constraints Validation or in Best Case refinements could be used as Restrict Mining in the last interval, which might reduce the complexity and the runtime. 

Example

To illustrate this process, let’s consider the base scenario of the Marvin dataset (shown in the figure below). The highlighted green fields represent all the targeted constraints that need to be controlled in this project: process capacity, minimum average grade of CU in process, dump capacity, bottom minimum width, mining minimum width and maximum vertical rate.

The decision tree depicted below has been constructed for a NPV Upside Potential process, based on the above scenario. In this decision tree, the scenarios progressively introduce each constraint into the project.

The target scenario is the last one, with the following restrictions: Process Production = 10 Mt, Dump Production = 30 Mt, Bottom Width = 100 m, Mining Width = 100 m, Vertical Rate = 150 m, and average CU = 0.5. However, the constraints are added iteratively, starting with the process production, followed by the dump production, widths, and so on.

Note how the cumulative NPV usually decreases (as expected) when more restrictions are added (see the note at the end for exceptions). Without this iterative process, there might be a lack of information to understand the NPV of the final, desired scenario.

Best-Worst Range Analysis

Best-Worst Range Analysis is the process of generating and analyzing scenarios to measure the impact of mine width constraints on the project’s net present value (NPV), from no restriction to wide widths. Measuring the impact of mine width constraints is crucial to determine the optimal fleet equipment configuration in mining operations, with the aim of optimizing productivity and maximizing the net present value (NPV) of the project.

By analyzing variations in width constraints, it is possible to identify the effect of space limitations on mining operations and evaluate the influence of different bench widths on fleet performance. Appropriate mining widths can bring a series of benefits: higher amount of material to be simultaneously extracted; higher fleet productivity; more efficient transportation; easier road maintenance and so on. Hence, the search for different widths allows finding the best combination of equipment and mining techniques aimed at maximizing production and profit simultaneously in each scenario.

In a Best-Worst Range Analysis, scenarios are created gradually increasing the mine width up to a feasible maximum, allowing users to have a comprehensive view of the impact of space limitations on the project’s performance.

Considering the nature of global optimization and the non-linearity of the problem, it is expected that there will be variations in performance (NPV, production, amount of mine fronts, etc.) as parameter values are modified. Therefore, it is crucial to generate a large number of scenarios to obtain a comprehensive analysis of the impact of these variations on the project. This way, a more precise understanding of how different parameter values affect the overall performance can be achieved.

Example

Consider the following base scenario and decision tree built for a Best-Worst Range Analysis using the Marvin dataset.

The goal in this case is to understand the impact of different values of mining width (in green), which will be tested with a range of different values, from 0m up to 200m. 

Note that there is no linear relationship between mining width and NPV. In other words, a higher mining width does not necessarily imply a lower NPV. As previously mentioned, that is due to the non-linearity of the problem.

Considering the nature of global optimization employed in MiningMath, other variables might also be affected by different mining widths. For example, the production could be analyzed for identification of possible issues when employing different mining widths.

Selectivity Analysis

Selectivity Analysis is the process of generating and analyzing scenarios to measure the impact of all geometric constraints on the project’s net present value (NPV), from the most selective to the least selective setup. Analyzing the impact of variations in geometric constraints is important to determine the optimal mine configuration and optimize productivity and profit.

By performing such an analysis, it is possible to identify the effect of geometric limitations on mining operations. Moreover, it is possible to evaluate the influence of each parameter and its variation on mine performance. This allows finding the best combination of parameters and mining techniques aimed at maximizing production and profit for each scenario.

In a Selectivity Analysis, scenarios are created including each geometric constraint sequentially and gradually increasing or decreasing their values from the least selective until the desirable requirement. This allows users to have a comprehensive view of the impact of geometric limitations on the project’s performance.

Considering the nature of global optimization and the non-linearity of the problem, it is expected that there will be variations in performance  (NPV, production, amount of mine fronts, etc.) as parameter values are modified. Therefore, it is crucial to generate a large number of scenarios to perform a comprehensive analysis of the impact of these variations on the project.

Example

Consider the following base scenario and decision tree built for a Selectivity Analysis using the Marvin dataset.

The goal is to understand the impact of different values of geometric constraints (mining width, bottom width, and vertical rate of advance). The geometric parameters (in green) will be tested with a range of different values: bottom width with values from 0m up to 200m; mining width with values from 0m up to 200m; and vertical rate of advance with values from 50m up to 300m. In this example, 26 different scenarios were evaluated. 

Note that there is no linear relationship between geometric constraints and NPV. In other words, a higher width or lower vertical rate of advance do not necessarily imply a lower NPV. As previously mentioned, that is due to the non-linearity of the problem. The cumulative NPV of the scenarios is compared in the graph below.

A diverse range of results can be achieved with a Selectivity Analysis. However, there are usually two possibilities when they are compared:

  • Contrasting geometric parameters with small NPV variations: note that when the bottom width changes from 0 m to 80 m, with the remaining parameters fixed, the NPV drops from 454 M$ to 444 M$. This indicates that large changes in geometric constraints do not necessarily lead to large changes in the NPV. The same holds for scenarios SA_BW000_MW100_VR150 and SA_BW100_MW100_VR250.

  • Similar geometric parameters with larger NPV variations: when comparing scenarios SA_BW080_MW100_VR150 and SA_BW100_MW160_VR150 there is a drop in NPV from 444 M$ to 370 M$, highlighting that the 20 m and 60 m changes in bottom width and mining width, respectively, can lead to a larger NPV difference in the project.

In conclusion, it is important to create several scenarios in a Selectivity Analysis. As exemplified above, results can be quite similar or quite different due to the non-linearity of the problem. Considering the nature of global optimization employed in MiningMath, it is also important to evaluate other indicators. The figures below depict the tonnage achieved for the production, demonstrating the possible impacts for different geometric constraints.

Design Enhancement

Design Enhancement is the process of creating scenarios to conduct extensive searches for solutions with similar NPV values but with fewer violations and improved shapes. This process allows finding more efficient and sustainable solutions that meet specific mine constraints and needs. Hence, seeking these kinds of scenarios is important for optimizing mining operations and for reducing risks and costs. 

In Design Enhancement, scenarios are created with more rigorous geometric constraints without compromising the desirable requirements. The goal is to reduce violations and find better shapes for the project. This is possible due to the global nature of the optimization and the non-linearity of the problem, which enables the use of stricter requirements for the geometric constraints that could lead to a better performance of the project.

Example

Consider the initial scenario and respective decision tree built for a Design Enhancement process. The goal is to evaluate stricter variations in the geometric constraints (in green).

Note the variation of results for Cumulative NPV and production. The base scenario has a small violation of the dump production in period 1. However, when modifying the minimum mining width to 120 m (scenario DE_BW100_MW120_VR150) this violation is no longer present. Hence, this is an example of how small variations in geometric constraints can lead to fewer violations.

NPV Enhancement

NPV Enhancement is the process of creating scenarios to conduct extensive searches for solutions with higher net present value (NPV) values and similar violations, while considering minimum requirements for project constraints. Scenarios are created that gradually modify constraints from desirable requirements to minimum requirements, with the goal of increasing project profitability.

Example

Consider the initial scenario and respective decision tree built for a NPV Enhancement process. The constraints in green (production capacities, geometric constraints, and average CU) are considered for modifications, from desirable requirements to minimum requirements, in order to identify solutions with higher net present value (NPV).

Results show a high variation in NPV while the production remains within its limits. Hence, it shows that it is possible to achieve higher NPVs when employing the minimum requirements defined by the user.

Bottleneck Analysis

Bottleneck Analysis involves generating scenarios to conduct extensive searches for solutions with fewer violations while preserving NPV, keeping geometries, optimizing mining operations, and reducing risks. This allows for the discovery of more efficient and sustainable solutions that meet specific constraints and needs of the mine.

In Bottleneck Analysis, after analyzing a desirable scenario it is possible to identify the constraint/s with demanding requirements that directly impact the optimization results and cause significant violation issues. Then, scenarios should be created by relaxing these demanding parameters, enabling users to make decisions to mitigate risks and ensure project viability.

Example

Consider the base scenario overview and the respective report on the dump production. Note how the first period has violated the 30Mt constraint.

A Bottleneck Analysis can help us identify the constraint(s) with demanding requirements that directly impact the optimization results and cause the violation in the dump production. Four different scenarios are built using a decision tree to analyze different values for dump production limits, minimum average of CU, and vertical rate of advance.

Note: To decide which parameters need to be changed, you can consider the constraint priority order that MiningMath employs in order to always deliver a solution. However, adjustments usually depend on the unique characteristics of each project and the flexibility available to modify its requirements.

The graphs below depict a comparative analysis of the results for the scenarios in the decision tree.

This analysis shows that the minimum average constraint of CU, production dump and vertical rate of advance are all restricting the base scenario. When relaxing these parameters, there is an increase of approximately 5% in the cumulative NPV, while the dump productions are kept within their limits and the process productions are closer to the target for some scenarios.

Multivariate Sensitivity Analysis

Multivariate Sensitivity Analysis is the process of creating and analyzing scenarios based on a range of possible values for selected constraints. Analyzing the impact of constraints variations is important for determining the optimal mine configuration and for optimizing productivity and profitability.

In Multivariate Sensitivity Analysis, scenarios are created gradually increasing or gradually decreasing the values of the constraints within a desired range, covering all combinations of values. This allows users to have a comprehensive view of the impact of combinations of constraint values on the project’s performance.

Considering the nature of global optimization and the nonlinearity of the problem, it is expected that there will be variations in performance as certain parameter values are modified. Therefore, it is crucial to generate a large number of scenarios to obtain a comprehensive analysis of the impact of these variations on the project.

Example

Consider the base scenario overview and the respective decision tree built for a Multivariate Sensitivity Analysis depicted below.

Results are depicted below. Note how a diverse range of cumulative NPVs is reported when compared to the Base scenario. Also note how certain productions are more stable than others, demonstrating the importance of performing a Multivariate Sensitivity Analysis.

Optimized Schedules

MiningMath software allows mining engineers to improve their strategic analysis through risk assessments that are unconstrained by a step-wise approach to optimization. MiningMath’s global mining optimization methodology helps to integrate multiple areas of the business. It handles all parameters simultaneously, delivering multiple scenarios and accounting for both strategic and tactical aspects.

MiningMath's single-step methodology to generate a diverse range of schedules with short-term integration straight from the block model.


Hundreds of unseen distinctive solutions

MiningMath provides different views and solutions for each parameter changed and each possible objective on the same mine. Search through our extensive set of workflows to improve your projects and generate optimized schedules.

Do not over constrain

When using single-step methodology, it is important not to target infeasible results. MiningMath provides a diverse range of workflows that can help you understand and optimize your project. For example:

Short-term Planning

MiningMath allows the integration between long- and short-term planning. By running the Best Case, surfaces to guide the optimization were generated, and they can be used as a guide based on the NPV upper bound. The Exploratory Analysis provides insights into the possible challenges of the project, as well as operational designs that could be used in further steps. Finally, we obtained a detailed schedule by using (or not) a surface, which could be the final pit or any intermediate one, as a guide.

Considering this workflow, you should now have enough information on a reasonable long-term view to improve the adherence/reconciliation of your plans. You could choose a surface and use it as Force and Restrict Mining to refine everything inside it. Remember that Force Mining makes the mining reach at least the surface inserted, which means that all the material inside its limits will be extracted, respecting the slope angles, while Restrict Mining prohibits the area below the inserted surface from being mined until the period in which it has been applied.

Thus, MiningMath will reach this exact surface in the timeframe required and enable you to test different geometries, blending constraints, and any other variable that could be required in short-term planning without interfering with the long-term overview. Additional helpful features in these refinements are the concepts of mining fronts and design optimization based on surface modification, which can be done while respecting all the parameters and generating results according to your needs.

Figure 1: Results generated using different helpful features.

Example

Parameters                                          Value
Timeframe                                           Custom factor (0.5)
Processing capacity                                 5 Mt per semester
Total movement                                      20 Mt per semester
Vertical rate of advance                            60 m per semester
Minimum Mining Width                                120 m
Bottom width                                        100 m
Force and Restrict Mining Surface                   Surface005 from Schedule Optimization
Stockpiling parameters                              On
Play with steeper slope angles in the short term?   Yes

Table 1: Set of constraints example (1).

Results examples

Further details

The example above used fewer constraints, the geometries were changed, and the average grade was left free. It is very helpful to define the early years based on a semester timeframe, which can help you manage stockpiles and any other variables in the first 3 years, for instance. Note that the period ranges in MiningMath are based on the selected timeframe; therefore, you should adjust your variables according to this value.

When we use Force+Restrict, we are telling the optimizer to break this volume into pieces and that it must mine this volume entirely, even if it is waste, so that the long-term view is respected. This way, you keep considering the whole deposit while deciding what to do in the first periods. The approach here is quite different from a set of revenue factors for a series of LG/Pseudoflow runs, followed by adjustments to find pushbacks without math optimization criteria. It is worth mentioning that this kind of setup should only be applied at the beginning or at the end of the life of mine, since Force+Restrict Mining surfaces used in intermediate periods could interfere directly with the results.

Using timeframes

Another strategy is to optimize the short-term along with the long-term using different timeframes. In this approach, the integration between the short and long term visions is made in the same optimization process, facilitating the analysis and strategic definitions.

It is possible to consider:

  • shorter time horizons (weeks, months, quarters...) for the initial periods of the operation;

  • annual plans as far as needed, for a precise definition of discounted cash flow;

  • less detail for longer time horizons. They need to be considered in the overall view of the mine, up to exhaustion, but they consume optimization processing time that can be more focused on the early years of operation.

Thus, there is value maximization at the strategic level and feasibility at the tactical level simultaneously. In addition, working in an integrated system minimizes compliance and reconciliation problems and improves communication between teams.

In this strategy, each period range will represent the time interval chosen in the timeframe, and the discount rate will be adjusted in alignment with that choice. Other constraints, such as production and vertical rate of advance (VRA), must also be adjusted to match each interval in the period ranges.
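As a sketch of these adjustments, the snippet below rescales the Marvin annual parameters to a given timeframe. The compounding formula for the per-period discount rate is a standard financial assumption, not a MiningMath-specific rule, and the annual limits are the Marvin defaults used throughout this page.

annual_discount = 0.10    # 10% per year
annual_process = 10e6     # t per year
annual_vra = 150          # m per year

def rescale(timeframe_years: float):
    """Return (discount per period, process limit, VRA) for a given timeframe."""
    discount = (1 + annual_discount) ** timeframe_years - 1
    return discount, annual_process * timeframe_years, annual_vra * timeframe_years

for tf in (0.5, 1, 5):
    d, p, v = rescale(tf)
    print(f"{tf:>4} yr: discount {d:.2%}, process {p/1e6:.0f} Mt, VRA {v:.0f} m")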

In order to clarify this strategy, Table 2 and Figure 9 present a possible list of constraints for an example using timeframes:

Table 2: Set of constraints for a timeframe example.

Constraints chosen in the interface for a timeframe example.

Multi-mine

MiningMath’s global optimization algorithm effectively addresses the challenges of integrated multi-mine projects by considering all pits simultaneously. Unlike individual pit optimization, this approach delivers a comprehensive solution that optimizes the entire project, providing a more cohesive and strategic overview.

Multiple pits projects

Formatting the block model

For multi-mine projects, the block model must include all mining regions for simultaneous optimization. If your pits are mapped in separate datasets, it’s essential to follow the steps outlined below:

  1. Work with a single block model or single pit first, run the initial tests and understand this region before handling the block model modification.

  2. Try to eliminate meaningless blocks, which would not affect the solution and could increase complexity.

  3. Add a second model or pit to explore the process of working with multi-mine projects. This combined block model file should meet the same requirements as a single model, as outlined on the data formatting page, ensuring unified characteristics.

    Experiment with surface adjustments to refine results, filter out regions you don’t wish to mine, and apply other guidance as needed. Since MiningMath surface files maintain a consistent order, using an Excel file (available here) can be a helpful tool for these modifications.

    Use mining fronts if you’d like to control the material extracted from each region.

  4. Add the other regions and start using everything that you wish.

Geometric constraints

The current version of MiningMath applies the same values for vertical rate, bottom width, and mining width across the entire block model. However, in a multi-pit scenario, each pit may have unique geometric parameters that impact selectivity. In these cases, we recommend setting the parameters for one pit, fixing its solutions (as the force and restrict mining settings have the highest priority), and then starting the optimization of the other pits. This approach ensures that the optimization considers the mass already planned for extraction from the first pit.

Example workflow

An efficient workflow starts by running an initial scenario without geometric parameters to serve as a validation or best-case scenario for scheduling optimization. Next, configure a scenario using the geometric parameters of the most selective mine—meaning the smallest widths and highest vertical rate (VR)—to create the least constrained scenario in terms of geometry. The surfaces generated from this setup can then be used to fix solutions for Mine 1.

Surface obtained in the first optimization for Mine 1.

For example, you could take Surface 1 and adjust the elevation in other areas to reflect the mass extracted in Period 1 from Mine 1, as well as the potential extraction from the second pit. With these results, you can refine surfaces or mining fronts, conduct a sensitivity analysis of the geometric parameters across multiple projects, and still maintain the benefits of global optimization.
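Since surfaces are exported as a grid of points in CSV, such adjustments can also be scripted. The sketch below assumes columns named X, Y and Z and an arbitrary coordinate window for Mine 1; both are illustrative assumptions, and raising the remaining points to the highest grid elevation is only a stand-in for the real topography.

import pandas as pd

surface = pd.read_csv("Surface-01.csv")

# Keep the Mine 1 area as optimized (hypothetical coordinate window),
# and raise every other point so no mining is forced outside Mine 1.
mine1 = surface["X"].between(4000, 6000) & surface["Y"].between(2000, 4000)
surface.loc[~mine1, "Z"] = surface.loc[~mine1, "Z"].max()

surface.to_csv("Surface-01-mine1-only.csv", index=False)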

Surfaces setup

Sustainable analysis

Technology has been developed to incorporate social and environmental factors in the mining project optimization, assessing these impacts whilst maximizing its net present value (NPV). The method can quantify socio-environmental aspects, such as dust, noise, avoidance of springs/caves/tribes, carbon emissions, water consumption, and any parameter that could be controlled by its average or sum. These environmental and social aspects can be assessed following internationally recognized standards (ISO 14044).

Figure 1: social and environmental factors.

Minviro, in partnership with MiningMath, has developed an approach to integrate such quantitative assessment into strategic mining optimization. This enables socio-environmental impacts to be constrained in the mining optimization, and the economic cost of reducing them to be calculated as a consequence. The way to do it is by inserting these variables linked with each block of your model, following these instructions. Considering this methodology, published here, significant reductions in the global warming impact could be achieved at a small economic cost. For example, using an environmental constraint it was possible to reduce CO2 emissions by 8.1% whilst achieving 95.9% of the net present value compared to the baseline, as you can see in the image below.

Figure 2: Reduction in environmental impacts.

Several scenarios for mine development, processing setup, energy/water consumption, CAPEX (content in Spanish), OPEX, etc. can be evaluated. It is also possible to include geometric constraints in order to restrict a mining area due to legal and site-specific issues that affect the local population, using this feature. Spatially and temporally explicit socio-environmental risks can be included in mining optimization, providing an opportunity to assess alternative project options or explore a socio-environmental cost-benefit analysis. For each aspect considered, decision makers are able to propose a range of possible scenarios and assess the economic cost of constraining these to different levels.

Figure 3: Possible scenarios to assess the economic cost of constraints.

The decision-making board, which previously had access to one or a few scenarios, now has a cloud of possibilities optimized and integrated with the technical and economic aspects of the project, reducing risks and adding sustainable value. The mathematical intelligence behind it is based on modern and well-accepted Data Science and Optimization concepts academically proven. The methodology has been tested in real mining projects with gains in NPV ranging between 15% and 20% on average, where socio-environmental aspects haven’t been added yet.

Figure 4: Performance over time.

Uncertainties at the Beginning

One of the many possibilities offered by MiningMath’s approach is to have multiple overview scenarios to evaluate different project assumptions, before doing a more detailed work. It does not demand an arbitrary/automated trial-and-error cutoff definition, nor a fixed input in form of pushbacks that will guide further optimization steps within the boundaries of a simplified problem. A subtle but substantial implication is the possibility of seeing a totally different mine development throughout the mine life cycle for each project assumption change. This allows mine managers to have a clearer view of the decision-tree and the possibilities on their hands, to improve economic, technical, and socio-environmental performances.

Considering this context, mine managers can assess greenfield projects to decide whether or not they should prioritize a geotechnical study. This can be done by running multiple scenarios considering the expected variability of slope angles for a given deposit. For example, benchmarks from similar deposits may indicate that the overall slope angles vary between 35 and 45 degrees. Before reaching a conclusion through an in-depth geotechnical study, multiple scenarios can be used to estimate the economic impact of each possible assumption for the overall slope angle. The conclusion might then indicate a low economic impact, which could postpone the need for a detailed study.

The same idea applies to any parameter, which ultimately represents a project assumption.

MiningMath conducted an illustrative example with 2,000 simulations varying multiple parameters independently. The results produced the chart in Figure 1, showing the probability (Y-axis) and the Project’s Value (X-axis). In this case, a detailed geotechnical study might be postponed, as the Project’s Value varies between 700 and 1,100 MU$ as a function of the OSA.

Figure 1: What would 2000 simulations say about NPV distributions?

Theory

Current Best Practices

MiningMath software allows mining engineers to improve their strategic analysis through risk assessments performed in a single-step approach to optimization. In other words, MiningMath’s global mining optimization methodology helps to integrate multiple areas of the business. It handles all parameters simultaneously, delivering multiple scenarios and accounting for both strategic and tactical aspects.

Hence, it is important to understand other current best practices employing a stepwise rationale and their disadvantages compared to MiningMath’s single-step approach.

Stepwise technologies

Mine planning models built with current best practices have developed shortcuts and approximations to try to deliver acceptable results that consider all the project’s complexities and constraints. Without such shortcuts, powerful machines would be required to find a solution that simultaneously determines the optimum pit limit and mining sequence delivering the maximum project value.

Figure 1 depicts a stepwise approach used by current best practices.

Figure 1: Current best practices: stepwise approach

Stages of stepwise approaches

These steps may include different strategies, technologies or algorithms. However, they are all usually solved individually in three larger stages:

  1. Nested pits: when finding nested pits it is possible to employ the Lerchs-Grossmann (LG) algorithm, the Pseudoflow algorithm, destination optimization, direct block scheduling, or even more recent heuristic mechanisms.
  2. Pushback definition: with the nested pits defined, the next step is usually to define pushbacks manually, done by expert mine planning engineers using a number of empirical rules. Automatic methods focused on NPV optimization could also be employed for pushback design, but these usually work under resource constraints and do not consider enough geometric requirements.
  3. Schedules: finally, starting from a chosen pushback, the scheduling is performed. A myriad of techniques can be employed for that, such as direct block scheduling, genetic algorithms, (fuzzy) clustering algorithms, dynamic programming, and heuristic methods in general. All with different rates of success, but with a limited variety of solutions due to the single pushback input.

Aim of stepwise approaches

Regardless of the technologies or algorithms, in a stepwise approach the aim is to initially find the final pit limit that maximizes the undiscounted cash flow to then focus on block sequence within this final pit envelope. By constraining the problem and predefining inputs, these shortcuts (approximations) help to save time and computer resources, enabling such software to consider complexities such as ore blending requirements, different processing routes, stockpiling policy, truck fleet considerations, and so on.

Disadvantages of stepwise approaches

With current best practices employing some stepwise approach, thousands of potential schedules can be generated with a multitude of different methods, but they are all based on the same stepwise rationale, with one step guiding the other. Commonly, schedules follow from a set of nested pits and other fixed input parameters such as geotechnics, metallurgical performance, blending constraints, etc. Therefore, the results frequently present similar behaviours and restrict the full exploration of the solution space.

MiningMath Uniqueness

MiningMath allows mining managers to improve their strategic analysis through risk assessments that are unconstrained by stepwise processes. Through math optimization models that integrate multiple areas of the business, MiningMath handles all parameters simultaneously and delivers multiple scenarios, accounting for both strategic and tactical aspects.

MiningMath optimization is not constrained by arbitrary decisions for cut-off grades or pushbacks, since these decisions are usually guided by prior knowledge or automated trial-and-error. Thus, each set of constraints in our technology has the potential to deliver an entirely new project development, including economic, technical, and socio-environmental indicators, along with a mine schedule, while aiming to maximize the project’s NPV.

How can it be used?

MiningMath acknowledges that each project has its own characteristics. Thus, it allows you to choose the workflow that best fits your needs and to decide which one should be used. Straight from the block model, you can find solutions for your short-term plan, schedules, optimized pushbacks, or super best case, as depicted in Figure 1.

Figure 1: Single-step approach employed in MiningMath. Straight from block model to short-term, schedules, optimized pushbacks or super best case.

Super best case

As MiningMath optimizes all periods simultaneously, without the need for revenue factors, it has the potential to find higher best-case NPVs than traditional best-case procedures based on LG/Pseudoflow nested pits, which do not account for processing capacities (gap problems), cut-off policy optimization, or the discount rate. Usually, these and many other real-life aspects are only accounted for later, through a stepwise process, limiting the project’s potential.

Discounted vs. Undiscounted Cash Flow

The use of LG/Pseudoflow methods to perform pit optimization aims to maximize the undiscounted cash flow of the project, whereas MiningMath maximizes the discounted cash flow. Therefore, regions that MiningMath has decided not to mine are likely regions where waste removal must be paid for in the earlier periods, while the discounted revenue from the ore beneath does not pay for its extraction.
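
As a simplified, purely illustrative example (all figures below are assumptions, not taken from any real project): suppose a marginal region requires 10 MU$ of waste stripping in year 1 to expose ore worth 12 MU$ in year 5. Undiscounted, the region adds 2 MU$ and would be included in an LG/Pseudoflow final pit. At a 10% discount rate, however, its contribution becomes [latex]-10/1.1^{1} + 12/1.1^{5} \approx -9.1 + 7.5 = -1.6[/latex] MU$, so a discounted cash flow optimization such as MiningMath’s would leave it unmined.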

A proper comparison between the two methodologies can be made by importing the final pit surface obtained from the other mining package into MiningMath and using it as a Force/Restrict Mining surface. This way, MiningMath performs the schedule optimization on the exact same surface, which allows you to compare the NPV of each case. Figures 2 and 3 depict two comparisons between undiscounted and discounted cash flows.

Figure 2: Undiscounted versus discounted cash flow optimization.
Figure 3: Undiscounted versus discounted cash flow optimization regarding a minimum mining width.

Pushbacks

MiningMath offers the option of producing Optimized Pushbacks with controlled ore production and operational designs to guide your mine sequencing. With this broader view in mind, you are already able to begin the scheduling stage. The block periods and destinations optimized by MiningMath can be imported back into your preferred mining package for comparison, pushback design, or scheduling purposes.

Schedules

When using MiningMath, it is possible to define the pit limit and the mine schedule simultaneously: that is, to determine which blocks should be mined, when this should happen, and where they should be sent in order to maximize the NPV, while respecting production and operational constraints, slope angles, the discount rate, stockpiles, and other requirements, all performed straight from the block model. This means that pit optimization, pushback definition, and scheduling are not obtained separately, but in a single, optimized process.

Decision Trees

To help with all that, our software allows you to build Decision Trees which enable a broader view of your project and a deeper understanding of the impacts of each variable. This is all possible because MiningMath works with a global optimization which simultaneously regards all variables, instead of using a step-wise approach. The software provides different views and solutions for the same mine for each parameter changed and each possible objective. 

Guaranteed Solutions

Multiple, complex constraints increase the likelihood that a feasible solution does not exist or cannot be found. Nonetheless, MiningMath always delivers a solution, even if it cannot honor the entire set of constraints imposed or has to reduce the NPV to find a feasible one.

When dealing with highly constrained problems, other technologies might take hours or days to realize there is no feasible solution. The reason is that they usually employ generic optimization algorithms that are not suited to making decisions in a mining problem. In this case, the only option is to prepare a second run with more flexible constraints, but still with no guarantee of feasibility.

In MiningMath, once infeasibility is detected, the algorithm decides which (less relevant) constraints should be relaxed, returning warnings to the user in the report. This is performed during the optimization process, without compromising runtimes.

In some cases, the set of constraints may be too limiting, and the software is unable to return a solution, generating the “Unfeasible project” message. In these cases, it is recommended that you relax some restrictions.

The constraint priority order, from highest to lowest, is listed below and depicted in Figure 1.

  1. Force+Restrict Mining together using the same surface.

  2. Slope Angles.

  3. Force Mining or Restrict Mining: the same concept as above, but the surfaces here are corrected according to the slope angles, which might introduce some differences.

  4. Minimum bottom width, minimum mining width, and minimum mining length.

  5. Total production capacity (the sum of the capacities across all destinations).

  6. Vertical rate of advance.

  7. Average and Sum constraints, modeled as strong penalties in the objective function.

  8. Time limit

  9. Improve the NPV

Figure 1: Constraints hierarchy order.

Theory Validation

MiningMath’s results are only possible due to its proprietary Math Programming Solver ©. It consists of a Mixed Integer Linear Programming (MILP) formulation and linearization methods that tackle the challenging non-linear aspects of mining optimization. In addition, it has its own Branch & Cut algorithm, which provides more efficiency than standard MILP optimizers since it is fine-tuned to this specific optimization problem.

Another major advantage of MiningMath comes from its mathematical formulations based on surfaces (Goodwin et al., 2006; Marinho, 2013), instead of the usual block precedences. Block precedence methods might lead to higher errors (Beretta and Marinho, 2014), providing steeper (i.e., riskier, more optimistic) slopes than requested. The use of surfaces eliminates these geotechnical errors and allows for block-by-block geotechnical zones, if needed.
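
The sketch below illustrates this point (it is not MiningMath code; the block sizes and requested angle are assumptions): a block-precedence pattern can only realize a few discrete slope angles, and the pattern closest to the target may be noticeably steeper than requested.

import math

def achievable_angle(dz, dx, lateral_steps):
    """Slope angle (degrees) implied by a precedence pattern that climbs one
    bench of height dz for every `lateral_steps` blocks of width dx."""
    return math.degrees(math.atan(dz / (lateral_steps * dx)))

requested = 45.0      # requested overall slope angle, in degrees (assumed)
dx, dz = 10.0, 15.0   # block width and bench height, in metres (assumed)

for steps in (1, 2, 3):
    angle = achievable_angle(dz, dx, steps)
    print(f"1 bench per {steps} block(s): {angle:.1f} deg "
          f"(error vs. requested: {angle - requested:+.1f} deg)")

# Roughly 56.3, 36.9, and 26.6 degrees: none matches 45 degrees, and selecting
# the single-block pattern yields a slope about 11 degrees steeper (riskier)
# than requested. Surface-based formulations avoid this rounding altogether.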

These surface-based formulations allow MiningMath to include geometric constraints, and, consequently, find solutions that are closer to real mining operations. The user can guide geometries by including mining and bottom widths, mining lengths, maximum vertical advance rates, and forcing/restricting mining areas. You can better understand how each constraint interacts with all others here. Such constraints give freedom to the user to work, or not, with predefined cut-offs and pushbacks which might limit the space of potential solutions. An in-depth view of MiningMath’s formulations and algorithm can also be seen here.

This approach (Figure 1) has been applied for years by clients such as Vale, Rio Tinto, Codelco, Kinross, AMSA, and MMG, with a growing number of licenses sold, press releases, and academic research also attesting to the consistency of the implementation. With constant development since 2013, MiningMath has reached a mature and robust state. It is the first and only single-step mining optimization engine available in the market.

Figure 1: MiningMath’s approach. From block model to schedule in a single step solved by its proprietary Math Programming Solver ©.

Mining Optimization Algorithm

MiningMath has a flexible mining optimization algorithm that consists of a Mixed Integer Linear Programming (MILP) formulation and linearization methods that tackle the challenging non-linear aspects of the problem. In addition, MiningMath has its own Branch & Cut algorithm, which provides more efficiency than standard MILP optimizers since it is fine-tuned to this specific optimization problem.

One of the major advantages of MiningMath comes from its mathematical formulations based on surfaces (Goodwin et al., 2006; Marinho, 2013) instead of the usual block precedences. Block precedence methods might lead to higher errors (Beretta and Marinho, 2015), providing steeper (i.e., riskier, more optimistic) slopes than requested. The use of surfaces eliminates these geotechnical errors and allows for block-by-block geotechnical zones, if needed.

Another crucial advantage is that MiningMath’s formulation includes geometric constraints, allowing its algorithm to find solutions that are closer to real mining operations. The user can guide geometries by including mining and bottom widths, maximum vertical advance rates, and forcing/restricting mining areas. Such constraints give freedom to the user to work, or not, with predefined cut-offs and pushbacks which might limit the space of potential solutions. Hence, the software provides different views and solutions for the same mine for each parameter changed.

Eventually, linear solutions need to be mapped onto an approximate integer (block-by-block) solution that represents the schedule of the mining problem in the real world. The conversion of continuous solutions into integer, non-linear ones is handled by MiningMath’s Branch & Cut algorithm.

Figure: Summary of the main steps of the MiningMath algorithm.

Algorithm’s flowchart and mathematical formulation

MiningMath employs an innovative mathematical formulation and powerful proprietary Branch & Cut algorithm for mining optimization problems. A description of this mathematical formulation and the three main steps of the algorithm employed are given below.

Step 1: Initial assessment

Figure 1: Initial assessment of entire block model and inclusion of likely profitable blocks within an initial surface.

The first step of the mining optimization algorithm is to remove regions that do not add any value to the project. This initial assessment considers slope constraints, reducing the size of the problem and providing a region of interest for the optimization process. Since MiningMath always employs surfaces in its mathematical formulations, this first set of likely profitable blocks is contained within an initial surface, as depicted in Figure 1.
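
A minimal sketch of the idea (assumptions only; it omits the slope-aware corrections that the actual Step 1 applies): discard columns whose best possible cumulative value is negative and record, for each remaining column, the deepest level still worth considering, which together outline an initial surface.

import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 20, 20, 10                      # toy model dimensions (assumed)
# Best economic value of each block, already maximized over destinations.
value = rng.normal(-1.0, 3.0, size=(nx, ny, nz))

# Cumulative value from the top level (index 0) downwards in each column.
cum = np.cumsum(value, axis=2)
best_level = cum.argmax(axis=2)              # depth of the best cut per column
keep = cum.max(axis=2) > 0                   # columns worth considering at all

# Columns that never pay off are excluded from the region of interest.
initial_surface = np.where(keep, best_level, -1)
print("columns kept:", int(keep.sum()), "of", nx * ny)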

Step 2: Problem linearization and mining optimization

Figure 2: Example solution with geometric constraints.

In the second step of the mining optimization algorithm, the non-linear, integer problem is approximated by an integer, linear one based on surfaces. For that, it is first necessary to define the common notation for the problem and its variables.

  • [latex]S[/latex]: number of simulated orebody models considered
  • [latex]s[/latex]: simulation index, [latex]s = 1,...,S [/latex]
  • [latex]D[/latex]: number of destinations
  • [latex]d[/latex]: destination index, [latex]d = 1,...,D [/latex]
  • [latex]Z[/latex]: number of levels in the orebody model
  • [latex]z[/latex]: level index, [latex] z = 1,...,Z [/latex]
  • [latex]T[/latex]: number of periods over which the orebody is being scheduled and also defines the number of surfaces considered
  • [latex]t[/latex]: period index, [latex]t = 1,...,T. [/latex]
  • [latex]M[/latex]: number of cells in each surface, where [latex]M = x \times y[/latex] and [latex]x[/latex] and [latex]y[/latex] are the numbers of mining blocks in the x and y dimensions.
  • [latex]c[/latex]: cell index, [latex]c = 1, \ldots ,M[/latex].
  • [latex]G[/latex]: number of unique destination groups defined. Each group might contain 1, all, or any combination of destinations.
  • [latex]g[/latex]: group index, [latex]g = 1, \ldots ,G[/latex].
  • [latex]x_{c,t,d}^{z}[/latex]: simulation-independent binary variable that assumes 1 if block [latex](c, z)[/latex] is being mined in period [latex]t[/latex] and sent to destination [latex]d[/latex], and 0 otherwise.
  • [latex]e_{c,t}[/latex]: simulation-independent continuous variables associated with each cell [latex]c[/latex] for each period [latex]t[/latex], representing cell elevations.
  • [latex]\overline{f_{t,g,s}},\underline{f_{t,g,s}}[/latex]: continuous variables to penalize sum constraints violated for each period, group of destinations, and simulation. One pair of variables is necessary for each quantifiable parameter modeled block by block whose sum is being constrained. An example would be variables used to control fleet hours spent in different periods, groups of destinations, and simulations. More information about possible parameters here. Note also that the software allows the control of the average of simulations, instead of dealing with each simulation individually, and the control by the sum of destinations, instead of each destination individually.
  • [latex]\overline{\alpha_{t,g}},\underline{\alpha_{t,g}}[/latex]: user defined weights for variables [latex]\overline{f_{t,g,s}},\underline{f_{t,g,s}}[/latex] with the same destination group [latex]g[/latex] and period [latex]t[/latex]. These can only be defined in the .ssscn files.
  • [latex]\overline{j_{t,g,s}},\underline{j_{t,g,s}}[/latex]: continuous variables to penalize average constraints violated for each period, destination, and simulation. One pair of variables is necessary for each quantifiable parameter modeled block by block whose average is being constrained. An example would be variables used to control the average grade of blocks mined in different periods, destination groups, and simulations. More information about possible parameters here. Note also that the software allows the control of the average of simulations, instead of dealing with each simulation individually, and the control by the sum of destinations, instead of each destination individually.
  • [latex]\overline{\beta_{t,g}},\underline{\beta_{t,g}}[/latex]: user defined weights for variables [latex]\overline{j_{t,g,s}},\underline{j_{t,g,s}}[/latex] with the same destination group [latex]g[/latex] and period [latex]t[/latex]. These can only be defined in the .ssscn files.
  • [latex]e_{c,t} \in \mathbb{R},\,\, [/latex]  [latex]t = 1,...,T[/latex],[latex]c=1,...,M[/latex]
  • [latex]x_{c,t,d}^{z} \in \{0,1\},\,\, [/latex]  [latex]c=1,...,M[/latex], [latex]t = 1,...,T[/latex], [latex]z=1,...,Z[/latex], [latex]d=1,...,D[/latex]
  • [latex]\overline{f_{t,g,s}},\underline{f_{t,g,s}} \in \mathbb{R_{\geq 0}}[/latex], [latex]t = 1,...,T[/latex], [latex]g=1,...,G[/latex], [latex]s=1,...,S[/latex]
  • [latex]\overline{j_{t,g,s}},\underline{j_{t,g,s}} \in \mathbb{R_{\geq 0}}[/latex], [latex]t = 1,...,T[/latex], [latex]g=1,...,G[/latex], [latex]s=1,...,S[/latex]

Having defined the set of variables, it is now possible to define a mathematical model with an objective function and the necessary constraints.

Objective function

Intuitive idea

  1. Sum of the economic value of blocks mined per period, destination, and simulation.
  2. Average the result by the number of simulations.
  3. Subtract penalties for certain violated restrictions associated with some user defined parameters.

Requirements

  • \(V_{c,t,d,s}^{z}\): cumulative discounted economic value of block \((c, z)\) in simulation \(s\), period \(t\) and destination \(d\). More about this calculation here.

Formulation

[latex]\max \frac{1}{S}\sum\limits_{s=1}^{S}\sum\limits_{t=1}^{T}\sum\limits_{c=1}^{M}\sum\limits_{z=1}^{Z}\sum\limits_{d=1}^{D} V_{c,t,d,s}^{z}\, x_{c,t,d}^{z} \;-\; p[/latex]
where
[latex]p = \sum\limits_{t=1}^{T}\sum\limits_{g=1}^{G}\left(\overline{\alpha_{t,g}}\sum\limits_{s=1}^{S}\overline{f_{t,g,s}} + \underline{\alpha_{t,g}}\sum\limits_{s=1}^{S}\underline{f_{t,g,s}} + \overline{\beta_{t,g}}\sum\limits_{s=1}^{S}\overline{j_{t,g,s}} + \underline{\beta_{t,g}}\sum\limits_{s=1}^{S}\underline{j_{t,g,s}}\right)[/latex]
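
For readers more comfortable with code, the toy model below sketches the same structure using the open-source PuLP package (dimensions, values, and weights are arbitrary assumptions, and this is neither MiningMath’s solver nor its actual formulation): binary block-destination variables, non-negative penalty variables, and an objective equal to the average value over simulations minus the weighted penalties.

from itertools import product
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

C, T, D, Z, S, G = 4, 2, 2, 3, 1, 1              # toy dimensions (assumed)
V = {k: 1.0 for k in product(range(C), range(T), range(D), range(Z), range(S))}
alpha_hi = alpha_lo = beta_hi = beta_lo = 10.0   # penalty weights (assumed)

prob = LpProblem("toy_objective", LpMaximize)

# x[c, t, d, z] = 1 if block (c, z) is mined in period t and sent to destination d
x = LpVariable.dicts("x", list(product(range(C), range(T), range(D), range(Z))),
                     cat=LpBinary)
# Non-negative slack variables penalizing violated sum (f) and average (j) limits
idx = list(product(range(T), range(G), range(S)))
f_hi = LpVariable.dicts("f_hi", idx, lowBound=0)
f_lo = LpVariable.dicts("f_lo", idx, lowBound=0)
j_hi = LpVariable.dicts("j_hi", idx, lowBound=0)
j_lo = LpVariable.dicts("j_lo", idx, lowBound=0)

value = (1.0 / S) * lpSum(V[c, t, d, z, s] * x[c, t, d, z]
                          for (c, t, d, z, s) in V)
penalty = lpSum(alpha_hi * f_hi[i] + alpha_lo * f_lo[i]
                + beta_hi * j_hi[i] + beta_lo * j_lo[i] for i in idx)
prob += value - penalty   # objective: mean value over simulations minus penalties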

Finally, the objective function is subject to the restrictions below.

Intuitive idea

  • Surfaces cannot cross each other: the surface associated with a given period must lie at or below the surface of the previous period.

Formulation

  • [latex]e_{c,t-1} - e_{c,t} \ge 0, c=1,...,M, t=2,...,T [/latex]
Figure 3: Two surfaces (blue and yellow): a) not crossing each other and respecting the constraint; b) crossing each other and not respecting the constraint.

Intuitive idea

  • Adjacent elevations in a single surface need to respect a maximum difference. This maximum will change based on which direction they are adjacent: x, y, or diagonally.

Requirements

  • [latex]H_x, H_y, H_d[/latex]: maximum difference in elevation for adjacent cells in [latex]x[/latex], [latex]y[/latex] and diagonal directions
  • [latex]X_c, Y_c, D_c[/latex]: following the same convention as [latex]H_x, H_y, H_d[/latex], the sets of cells adjacent to a given cell [latex]c[/latex] laterally in [latex]x[/latex], laterally in [latex]y[/latex], and diagonally, respectively.

Formulation

  • [latex]e_{c,t} - e_{x,t} \le H_x, c=1,...,M, t=1,...,T, x \in X_c[/latex]
  • [latex]e_{c,t} - e_{y,t} \le H_y, c=1,...,M, t=1,...,T, y \in Y_c [/latex]
  • [latex]e_{c,t} - e_{d,t} \le H_d, c=1,...,M, t=1,...,T, d \in D_c [/latex]
Figure 4: Maximum allowed difference (Hx, Hy, and Hd) in elevation between adjacent cells in contact laterally in the x direction (a), in contact laterally in the y direction (b), and in contact diagonally (c).
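
As an illustrative assumption of how these limits could be derived (not necessarily the rule used internally): for square cells of side [latex]L[/latex] and a maximum slope angle [latex]\theta[/latex], one could set [latex]H_x = H_y = L\tan\theta[/latex] and [latex]H_d = L\sqrt{2}\tan\theta[/latex]. With [latex]L = 10[/latex] m and [latex]\theta = 45^{\circ}[/latex], adjacent cells could then differ by at most 10 m laterally and roughly 14.1 m diagonally.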
 

The proprietary constraints are not disclosed. The following are possible examples of constraints of the same type, but not the ones actually employed.

Intuitive idea

  • Surfaces define when blocks are mined. For example, blocks between the surfaces associated with periods 1 and 2 will be mined in period 2. A block is between two surfaces if its centroid lies between them.

Requirements

  • [latex]E_{c}^{z}[/latex]: elevation of centroid for a given block [latex](c, z)[/latex]

Formulation

  • [latex]E_{c}^{z} \times \sum\limits_{d=1}^{D}x_{c,1,d}^{z} \ge e_{c,1}, c=1,...,M,  z=1,...,Z[/latex]
  • [latex]e_{c,t-1} \ge E_{c}^{z} \times \sum\limits_{d=1}^{D}x_{c,t,d}^{z} \ge e_{c,t}, [/latex][latex]c=1,...,M, t=2,...,T, z=1,...,Z[/latex]
Figure 5: Distance between centroids above surfaces (green lines) and below surfaces (red lines) respecting the constraints. Blue blocks are mined in period 1, while yellow blocks are mined in period 2.
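
The small helper below illustrates this rule in code (a sketch under the assumption, stated above, that surfaces only move downwards over time; it is not part of MiningMath):

def period_of_block(centroid_z, elevations):
    """elevations[t-1] is the surface elevation of the block's cell after
    period t, with elevations decreasing over time. A block is mined in the
    first period whose surface drops to or below its centroid."""
    for t, e in enumerate(elevations, start=1):
        if centroid_z >= e:
            return t
    return None   # the block stays below every surface and is never mined

# Example: surfaces at 120 m, 90 m, and 60 m; a centroid at 100 m is mined in period 2.
print(period_of_block(100.0, [120.0, 90.0, 60.0]))   # -> 2
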
Intuitive idea
  • Each mined block can only be sent to one destination.
Formulation
  • [latex]\sum\limits_{d =1}^{D}x_{c,t,d}^{z} = 1, c=1,....,M, t=1,...,T, z = 1,...,Z[/latex]

Intuitive idea

  • For each period and destination group, there are upper and lower limits on the total tonnage to be extracted. Destination groups might be formed by any unique combination of destinations (one, many, or all). The sum of the tonnages of mined blocks sent to the same group of destinations in the same period must respect these limits.

Requirements

  • [latex]T_c^z[/latex]: tonnage for a given block [latex](c, z)[/latex].
  • [latex]\overline{T_{t,g}}, \underline{T_{t,g}}[/latex]: upper and lower limits, respectively, on the total tonnage to be extracted during period [latex]t[/latex] and sent to destinations in group [latex]g[/latex].

Formulation

  • [latex]\underline{T_{t,g}} \le \sum\limits_{c=1}^M\sum\limits_{z=1}^{Z}\sum\limits_{d \in g}T_c^z x_{c,t,d}^{z} \le \overline{T_{t,g}}, t = 1,...,T, g = 1,..., G[/latex]

Intuitive idea

  • The user can define a certain parameter (e.g., fleet hours spent) associated with each mined block to have its sum controlled. The sum of the values of this parameter over all mined blocks must respect lower and upper bounds for each period, destination group (optional), and simulation (individually or on average). Destination groups might be formed by any unique combination of destinations (one, many, or all).

Requirements

  • [latex]\underline{F_{t,g,s}},\overline{F_{t,g,s}}[/latex]: lower and upper limits, respectively, in sum of user defined parameter to be respected in period [latex]t[/latex], destination group [latex]g[/latex], and simulation [latex]s[/latex].
  • [latex]F_{c,d,s}^{z}[/latex]: value of user defined parameter related to a given block [latex](c, z)[/latex] in destination [latex]d[/latex] and simulation [latex]s[/latex].

Formulation

  • [latex]\underline{F_{t,g,s}} \le \sum\limits_{c=1}^M\sum\limits_{z=1}^{Z}\sum\limits_{d \in g}F_{c,d,s}^{z}x_{c,t,d}^{z} + \underline{f_{t,g,s}} - \overline{f_{t,g,s}} \le \overline{F_{t,g,s}},[/latex]

    [latex]t = 1,...,T, g = 1,..., G, s = 1,..., S[/latex]
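
For instance (values assumed purely for illustration), if the upper limit for a period is [latex]\overline{F_{t,g,s}} = 1000[/latex] fleet hours but the scheduled blocks require 1050 hours, feasibility is preserved by setting [latex]\overline{f_{t,g,s}} = 50[/latex], and the objective function pays a penalty of [latex]\overline{\alpha_{t,g}} \times 50[/latex] for this violation.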

Intuitive idea

  • The user can define a certain parameter (e.g., grade) associated with each mined block to have its average controlled. This average is weighted by the block’s tonnage and by an optional, user defined weight. It must respect lower and upper bounds for each period, destination group (optional), and simulation (individually or on average). Destination groups might be formed by any unique combination of destinations (one, many, or all).

Requirements

  • [latex]\underline{J_{t,g,s}},\overline{J_{t,g,s}}[/latex]: lower and upper limits, respectively, for average value of user defined parameter to be respected in period [latex]t[/latex], simulation [latex]s[/latex], and destination group [latex]g[/latex].
  • [latex]T_{c}^{z}[/latex]: tonnage for a given block [latex](c, z)[/latex].
  • [latex]J_{c,s,d}^{z}[/latex]: value of user defined parameter of block [latex](c, z)[/latex] sent to destination [latex]d[/latex] in simulation [latex]s[/latex]
  • [latex]P_{c,t,d,s}^{z}[/latex]: user defined weight for block [latex](c, z)[/latex] in period [latex]t[/latex], destination [latex]d[/latex], and simulation [latex]s[/latex]

Formulation

  • [latex]\underline{J_{t,g,s}} \le[/latex][latex]\frac{\sum\limits_{c=1}^M\sum\limits_{z=1}^Z\sum\limits_{d\in g}P_{c,t,d,s}^{z}T_{c}^{z}J_{c,s,d}^{z}x_{c,t,d}^{z}}{\sum\limits_{c=1}^M\sum\limits_{z=1}^Z\sum\limits_{d\in g}P_{c,t,d,s}^{z}T_{c}^{z}}[/latex][latex] + \underline{j_{t,g,s}} - \overline{j_{t,g,s}} \le \overline{J_{t,g,s}}[/latex]

    [latex]t = 1,...,T, g = 1,..., G, s = 1,..., S[/latex]

Proprietary constraints not disclosed

Intuitive idea

  • Surfaces should respect geometric parameters defined by the user, such as minimum bottom width, minimum mining width, minimum mining length, and maximum vertical rate of advance, as depicted here.

Formulation

  • [latex]Geometric(e_{c,t}) \le \text{geometric restriction}, c=1,...,M, t=1,...,T[/latex]

Step 3: Integer, non-linear solution and evaluation

The next step in the mining optimization algorithm is to convert the linear solution into an integer, non-linear one. MiningMath’s Branch & Cut method is responsible for this conversion. Once it is done, the resulting solution can be evaluated, leading either to the end of the algorithm’s execution or to a new optimization process. This new process might be triggered if one of two situations arises:

  1. restrictions are violated, either due to the transformation from the linear to the integer, non-linear solution or due to the problem being infeasible; or

  2. an evaluation of certain restrictions in the transformed integer, non-linear solution concludes that they might not affect the problem and are better discarded or modified.

If either of these is true, the solution at this stage is sent back to Step 2 for linearization and refinement. If the refinement is caused by situation 1, the goal is to improve the solution’s feasibility, following the constraint hierarchy order depicted in Figure 6.

Figure 6: constraints hierarchy order.

In contrast, if it is caused by situation 2, the goal is to allow the optimization to focus on the bottlenecks of the problem and improve the current NPV. Once neither situation is identified, the current solution is returned. Note that each time the algorithm goes back to Step 2 a new global optimization is performed, so the new resulting solution might be entirely different.

Pseudo-code

The whole process of the mining optimization algorithm, from input to output, is summarized in the pseudo-code below. References are made to the previous Steps 1, 2, and 3. This algorithmic flow, together with the proposed mathematical formulation, exemplifies the innovative methodology applied to solve a single-mine scheduling problem.

					INPUT: Block model,
       Mining parameters,
       Optional time limit T
OUTPUT: Excel report summarizing the main results of the optimization,
        Outputs of mining optimization, topography, and pit surfaces using   
        .csv format that can also be imported into other mining packages.

EXECUTE initial assessment // Step 1
CREATE problem linearization P // Step 2
SET CURRENT_SOLUTION to empty
SET FEASIBLE_SOLUTION to empty
REPEAT // Step 3
    SOLVE P // Optimization engine + proprietary Branch & Cut algorithm
    SET LS to the integer, linear solution of P
    TRANSFORM LS to an integer, non-linear solution RS
    
    // Evaluate RS
    IF RS has no violated constraints THEN
        SET FEASIBLE_SOLUTION to RS
    ENDIF
        
    IF RS is better than CURRENT_SOLUTION THEN 
        SET CURRENT_SOLUTION to RS
    ENDIF
    
    // Evaluate if new iteration is necessary
    IF FEASIBLE_SOLUTION is empty THEN
        // Step 2 and Figure 6
        CREATE new problem linearization P
               with flexible constraints
        CONTINUE // Go back to loop's start
    ELSE IF T has been reached THEN
        BREAK // Leave loop
    ELSE IF RS has violated constraints that were unviolated in LS OR
          has constraints that can be discarded/modified THEN
        CREATE new problem linearization P // Step 2
        CONTINUE // Go back to loop's start
    ELSE
        BREAK // Leave loop
    ENDIF
WHILE TRUE
EXPORT reports and outputs from CURRENT_SOLUTION


Evalua