Finding a bottleneck in an enterprise, using a distribution system as an example. Methodology for assessing the production capabilities of an enterprise. Identification of bottlenecks

There is a well-known statistic: 20% of the code runs 80% of the time. The
exact figures hardly match the real state of affairs, but the general idea
is quite interesting: it turns out that optimizing the entire application is
a thankless and pointless exercise, and it makes sense to optimize only the
20% of the application that runs the longest. And finding these 20% is not
that hard.

In this article, we will talk about profiling. According to Wikipedia,
profiling is nothing more than "collecting the characteristics of a
program's operation, such as the execution time of individual fragments,
the number of correctly predicted conditional branches, the number of cache
misses, and so on". In plain terms, this means finding the program's
bottlenecks: all those sections of code where the program starts to
"stall", forcing the user to wait.

The simplest profiling can be done by hand (and below I will show how), but
it is better to rely on the community, which has already created all the
necessary tools. The first and most popular of them is called GNU Profiler
(or gprof); it has been used for ages to profile code produced by the GCC
compiler. The second is the GNU coverage testing tool (gcov), a utility for
more detailed performance analysis. The third is a set of debugging and
profiling tools under the common name Google Performance Tools (GPT for
short). And the fourth is Valgrind, which, although designed to find memory
errors, contains in its arsenal a number of utilities for analyzing program
performance.

Let's start, as expected, with the classics.

GNU Profiler

GNU Profiler (gprof) is one of the oldest profilers available for UNIX-like
operating systems. It comes with the GCC toolchain and therefore can be
used to profile programs written in any language it supports (and that is
not only C/C++, but also Objective-C, Ada, and Java).

By itself, gprof is not a profiling tool; it only displays the profile
statistics that the application accumulates while it runs (naturally, no
application does this by default, but it will start to if you build the
program with the "-pg" flag).

Let's see how this works in real conditions. To feel all the virtues of
gprof, we will apply it not to some abstract, artificially created
application, but to a real, everyday one. Let it be gzip.

We get and unpack the sources of the archiver:

$ wget www.gzip.org/gzip-1.3.3.tar.gz
$ tar -xzf gzip-1.3.3.tar.gz
$ cd gzip-1.3.3

Install the tools needed for the build (in Ubuntu this is done by
installing the build-essential meta-package):

$ sudo apt-get install build-essential

Run the build configurator, passing the "-pg" flag in the CFLAGS
environment variable:

$ CFLAGS="-pg" ./configure

Compile the program:

$ make

Now we have a gzip binary capable of keeping statistics of its own
execution. Each run will be accompanied by the generation of a gmon.out
file:

$ ./gzip ~/ubuntu-10.10-desktop-i386.iso


$ ls -l gmon.out
-rw-r--r-- 1 j1m j1m 24406 2010-11-19 14:47 gmon.out

This file is not meant to be read by humans, but it can be used to generate
a detailed performance report:

$ gprof ./gzip gmon.out > gzip-profile.txt

The most important part of the resulting file is shown in the screenshot.

Each row holds the execution statistics of one function; the columns are
various metrics. We are interested in the first, third, fourth, and seventh
columns: they show the total time spent executing the function (the first
column as a percentage, the third in seconds), the number of its calls, and
its name.
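
Schematically, the top of that table looks something like this (the
percentages and call counts are the ones discussed below; the timings are
purely illustrative):

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 29.0      2.16     2.16        1  2160.00  4780.00  deflate
 22.1      3.81     1.65 450613081    0.00     0.00  longest_match
 13.0      4.78     0.97    22180     0.04     0.04  fill_window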

Let's try to analyze the report. First on the list is the deflate function,
which was called only once but "gobbled up" 29% of the program's total
execution time. It is the implementation of the compression algorithm; if
our task were to optimize gzip, this is where we should start. 22% of the
time went to the longest_match function, but, unlike deflate, it was called
450,613,081 times, so each individual call took a negligible amount of
time. This is the second candidate for optimization. The fill_window
function took 13% of the time and was called "only" 22,180 times. Perhaps
in its case, too, optimization could give results.

Scrolling the report file to the middle (by the way, right after the table
there is a detailed description of all its columns, which is very
convenient), we reach the so-called call graph. It is a table divided into
records separated from one another by dashed lines (runs of minus signs).
Each record consists of several lines; the second line, contrary to common
sense, is called the "primary" one and describes the function the record is
dedicated to. The line above it describes the function that calls it, and
the lines below, the functions it calls.

The columns contain the following information (from left to right): the
index (present only in the primary line and meaning essentially nothing);
the percentage of time spent executing the function (% time); the time
spent executing the function itself, in seconds (self); the time spent
executing the function and all the functions it calls (children); the
number of calls to the function (called); and its name (name).
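
A single record, say for deflate, might look roughly like this (the numbers
are again illustrative):

-----------------------------------------------
                2.16    2.62       1/1              main [2]
[3]     64.2    2.16    2.62       1                deflate [3]
                1.65    0.00 450613081/450613081    longest_match [4]
                0.97    0.00   22180/22180          fill_window [5]
-----------------------------------------------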

The call graph turns out to be very useful when you are optimizing someone
else's code: not only do the program's bottlenecks become visible, but so
does the whole logic of its operation, which may not be obvious from
studying the source code.

GNU Coverage testing tool

In addition to gprof, the GCC toolchain includes one more profiling tool,
which allows you to obtain a more detailed report on the application's
execution. The utility is called gcov and is designed to generate so-called
annotated source code, in which each line is preceded by the number of
times it was executed. It can be needed for a deeper study of an
application's problems, when the functions responsible for the slowdown
have been found but the essence of the problem remains unclear (for
example, it is not obvious which line of a deeply nested loop inside a long
function is responsible for the abnormal drop in performance).

gcov cannot use the statistics generated by an application built with the
"-pg" flag; it requires rebuilding with the "-fprofile-arcs" and
"-ftest-coverage" flags:

$ CFLAGS="-fprofile-arcs -ftest-coverage" ./configure && make

$ ./gzip ~/ubuntu-10.10-desktop-i386.iso

For each source code file, a call graph will be generated, on the basis of
which a human-readable annotated source can be created:

$ gcov deflate.c
File "deflate.c"
Lines executed:76.98% of 139
deflate.c:creating "deflate.c.gcov"

The resulting file consists of three columns: the number of executions of a
line, the line number, and the line itself. For lines that contain no code,
the first column holds a minus sign; for lines that were never executed, it
holds the sequence #####.
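
An annotated fragment looks roughly like this (the line numbers and counts
here are illustrative):

        -:   66:#ifdef DEBUG
    #####:   67:        error("this line was never executed");
        -:   68:#endif
    22180:   69:        fill_window();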

Google Performance Tools

Google Performance Tools (GPT for short) is a development by Google
engineers designed to find memory leaks and application bottlenecks. Like
gprof, GPT is not external to the application under test and makes the
application itself keep statistics of its execution. However, it uses not
code embedded at the application's build stage, but libraries, which can
either be linked to the application at build time or loaded at startup.

In total, two preload libraries are available to developers: tcmalloc
(which, according to the GPT authors, is the fastest implementation of the
malloc function in the world, and which also lets you analyze how memory is
consumed, allocated, and leaked) and profiler, which generates a report on
program execution in the manner of gprof. The set also includes the pprof
utility, designed for analyzing and visualizing the accumulated data.

The source code, as well as rpm and deb packages of this entire set, are
available on the official page (code.google.com/p/google-perftools), but I
would not advise bothering with manual installation, since the kit is
available in the standard Fedora and Ubuntu repositories and can be
installed with one simple command:

$ sudo apt-get install google-perftools \
libgoogle-perftools0 libgoogle-perftools-dev

The easiest way to attach the profiler to an application is to load the
library at startup:

$ LD_PRELOAD=/usr/lib/libprofiler.so.0.0.0 \
CPUPROFILE=gzip-profile.log ./gzip \
/home/j1m/ubuntu-10.10-desktop-i386.iso

However, the Google developers themselves advise against this method
(apparently because of problems with programs written in C++), recommending
that the library be linked in at build time. Well, let's not argue.

For the experiments, we will take the same gzip and rebuild it once more,
linking the binary with the right library:

$ cd ~/gzip-1.3.3
$ make clean
$ LDFLAGS="-lprofiler" ./configure && make

gzip is now again ready to log its own execution, but will not do so by
default. To activate the profiler, you must declare the CPUPROFILE
environment variable and assign it the path to the profile file:

$ CPUPROFILE=gzip-cpu-profile.log ./gzip \
~/ubuntu-10.10-desktop-i386.iso
PROFILE: interrupts/evictions/bytes = 4696/946/91976

As with gprof, the resulting report is in binary form and can be read only
with a special utility. In GPT, its role is played by the perl script pprof
(in Ubuntu it is renamed google-pprof to avoid confusion with another
utility of the same name), which can generate not only tables and annotated
sources in the manner of gcov, but also visual call graphs. In total, the
utility has 11 output types, each of which is assigned a corresponding
command-line argument:

  1. Text (--text) - a table similar to gprof output;
  2. Callgrind (--callgrind) - output in a format compatible with the
     kcachegrind utility (from the valgrind package);
  3. Graphical (--gv) - a call graph, displayed on the screen immediately;
  4. Listing (--list=<function>) - an annotated listing of the specified
     function;
  5. Disassembled listing (--disasm=<function>) - an annotated
     disassembled listing of the specified function;
  6. Symbolic (--symbols) - a listing of decoded symbolic names;
  7. Graphics file (--dot, --ps, --pdf, --gif) - a call graph saved to a
     file;
  8. Raw (--raw) - a binary profile file prepared for transmission over
     the network (re-encoded with printable characters).

Of greatest interest to us are the text ("--text") and graphical ("--gv")
output types. Only they can give complete information about the
application's execution and all its problem areas. The text output is
generated as follows:

$ google-pprof --text ./gzip gzip-cpu-profile.log

As you can see in the screenshot, the output is a table listing all the
functions and the costs of executing them. At first glance it looks very
similar to the table generated by the gprof utility, but it is not. Being
just a library, GPT cannot keep program execution statistics in as much
detail and as exactly as code embedded directly into the application does.
Therefore, instead of recording every function entry and exit (the behavior
of a program compiled with the "-pg" flag), GPT uses a technique called
sampling. One hundred times per second, the library activates a special
function whose task is to find out at what point the program is currently
executing and to write this data to a buffer. When the program finishes,
the accumulated data is used to build the profile file, which is written to
disk.

That is why pprof output contains no information about how many times a
function was called during the program's run, or about what percentage of
time was spent on its execution. Instead, for each function it gives the
number of samples during which the program was found to be executing that
function. The number of samples given for each function can therefore
safely be treated as a measure of its total execution time.

In all other respects, the table strongly resembles gprof output: one
function per row, one metric per column. There are six columns in total:

  1. The number of samples that fell in this function;
  2. The same as a percentage of all samples;
  3. A running total of the percentages of the functions listed so far;
  4. The number of samples that fell in this function and all its
     descendants;
  5. The same as a percentage of the total number of samples;
  6. The function name.

At first, this approach to measuring execution time seems too inaccurate,
but if you compare the tables produced by gprof with the pprof tables, it
becomes clear that they show the same picture. Moreover, GPT lets you
change the number of samples per second with the CPUPROFILE_FREQUENCY
environment variable, so the accuracy can be increased tenfold, a
hundredfold, or a thousandfold if the situation requires it (for example,
if you need to profile the execution of a very small program).
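
For instance, a run with tenfold accuracy might look like this (same test
file as before):

$ CPUPROFILE_FREQUENCY=1000 \
CPUPROFILE=gzip-cpu-profile.log ./gzip \
~/ubuntu-10.10-desktop-i386.iso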

An undoubted advantage of GPT over gprof is the ability to present
information in graphical form. To activate this function, pprof should be
run with the "--gv" flag (by the way, displaying the graph requires the gv
utility of the same name to be installed):

$ google-pprof --gv ./gzip gzip-cpu-profile.log

The function call graph generated by this command is very visual and much
easier to understand and study than the similar text-based graph produced
by gprof. The name and execution statistics of each function are placed in
rectangles whose size is directly proportional to the amount of time spent
executing the function. Inside a rectangle are data on how much time was
spent executing the function itself and its descendants (time is measured
in samples). The links between rectangles indicate the order of function
calls, and the numbers next to the links give the execution time of the
called function and all its descendants.

Another advantage of GPT is the ability to use different levels of
granularity for the output, letting the user choose the unit of division.
The default unit is the function, so any pprof output is logically divided
into functions. If desired, however, the unit of division can be source
code lines (the "--lines" argument), files ("--files"), or even physical
memory addresses ("--addresses"). Thanks to this, GPT is very convenient
for finding bottlenecks in large applications: first you analyze the
performance at the level of individual files, then descend to functions,
and finally locate the problematic spot at the level of source lines or
memory addresses.

And one last thing. As I said above, GPT is not only a good profiler but
also a tool for finding memory leaks, so it has a very nice side effect:
the ability to analyze an application's memory consumption. To do this, the
application must be built with, or run under, the tcmalloc library, and the
HEAPPROFILE environment variable must contain the path for the profile
file. For example:

$ LD_PRELOAD=/usr/lib/libtcmalloc.so.0.0.0 \
HEAPPROFILE=gzip-heap-profile.log \
./gzip ~/ubuntu-10.10-desktop-i386.iso
Starting tracking the heap
Dumping heap profile to gzip-heap-profile.log.0001.heap (Exiting)

A numbered suffix such as .0001.heap will be appended to the resulting file
name. If you feed this file to the pprof utility with the "--text" flag, it
will display a table of functions and the memory consumption of each of
them. The columns mean the same as in ordinary profiling, except that
instead of sample counts and their percentages, the table now contains the
amounts of memory consumed and percentages of total memory consumption.

If necessary, this information can be obtained in graphical form, and the
unit of division can be changed as well. The library can be tuned with
various environment variables, the most useful of which is called
HEAP_PROFILE_MMAP: it enables profiling for the mmap system call (by
default, GPT collects statistics only for the malloc, calloc, realloc, and
new calls).

A few words about Valgrind

In the last part of the article, we will briefly look at how to use
tool Valgrind for application profiling. Valgrind is very powerful
a memory debugger that is able to find such memory errors that
other utilities don't even suspect. It has a modular architecture that
over time allowed it to acquire several plugins that are not related
straight to debugging. There are three such plugins:

  1. Cachegrind - collects statistics on hits of the program's data and
     instructions in the processor's level 1 and level 2 caches (a powerful
     and complex tool, useful when profiling low-level code).
  2. Massif - a heap profiler similar in functionality to its counterpart
     from the GPT package.
  3. Callgrind - a profiler much like gprof and GPT.

By default, Valgrind uses memcheck (the memory debugger) as its main
plugin, so to run it in profiling mode you need to specify the desired
plugin manually. For example:

$ valgrind --tool=callgrind ./program

After that, a file named callgrind.out.PID will be created in the current
directory; it can be analyzed with the callgrind_annotate utility or the
graphical program kcachegrind (installed separately). I will not describe
the format of the data generated by these programs (it is well covered in
the man pages of the same names); I will only say that callgrind_annotate
is best run with the "--auto" flag so that it can find the program's source
files on its own.
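
For example (substituting the actual PID):

$ callgrind_annotate --auto=yes callgrind.out.PID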

To analyze memory consumption, Valgrind should be run with the
"--tool=massif" argument. After that, a massif.out.PID file will appear in
the current directory; it can be parsed with the ms_print utility. Unlike
pprof, it can not only display data in the form of a standard table, but
also generate beautiful ascii-art graphs.
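
The whole sequence for our test subject might look like this:

$ valgrind --tool=massif ./gzip ~/ubuntu-10.10-desktop-i386.iso
$ ms_print massif.out.PID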

Conclusions

Tools such as gprof, gcov, and GPT make it possible to analyze an
application's operation and identify all its bottlenecks down to an
individual processor instruction, and by bringing Valgrind into the
profiling process you can achieve amazing results.

INFO

By default, gprof does not display profile information for functions of the
libc library, but the situation can be corrected by installing the
libc6-prof package and building the test subject against the libc_p
library: export LDFLAGS="-lc_p".

You can activate the GPT profiler not only with the CPUPROFILE environment
variable, but also by framing the code section under test with the
ProfilerStart() and ProfilerStop() functions, which are declared in
google/profiler.h.
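
A minimal sketch of this approach (hot_function() and the file name are
invented for illustration; link the program with -lprofiler):

#include <google/profiler.h>

static void hot_function(void) {
    /* the code section whose profile we want to capture */
    volatile long sum = 0;
    for (long i = 0; i < 100000000; i++)
        sum += i;
}

int main(void) {
    /* startup code executed before this point stays out of the profile */
    ProfilerStart("hot-path.prof");   /* where to write the profile */
    hot_function();                   /* only this section is sampled */
    ProfilerStop();                   /* stop sampling, flush to disk */
    return 0;
}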

WARNING

Due to security requirements, GPT will not work for applications that have
the SUID bit set.

17. Analysis of emerging bottlenecks in the enterprise

    The task of operational planning of the production program is to determine the range and volume of the products to be produced. For this, the following information must be known:

    1) product prices;

    2) production costs;

    4) available production facilities.

    The difficulty of planning a production program is determined primarily by the type and number of bottlenecks in production. In addition, possible alternative technological processes matter: we are talking about the installed equipment and the intensity of its use in the production process.

    There are various approaches to planning the production program.

    Three fundamental cases are distinguished at an enterprise:

    a) No bottlenecks.

    Since there are no bottlenecks, all products can be produced.

    b) The presence of one bottleneck.

    Let's assume it has been established that there is one bottleneck in the enterprise. It is then necessary to distinguish between the case of a single technological process and that of possible alternative processes.

    If the variable costs per unit of time are the same for all products, you need to check whether the coverage amounts are positive for all products and processes, or whether they are negative for certain combinations of products and processes.

    If the sales revenue and the variable costs per unit of output, and hence the coverage amounts, are known, the optimal production program can be formed step by step. Ranking by coverage amounts allows the program to be built up sequentially as long as there is only one bottleneck.

    c) The presence of several bottlenecks.

    If, when checking the sales and production programs, it turns out that there are several bottlenecks in production at once, the decision is more difficult, and linear programming methods should be used, as sketched below.
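
    In outline (my notation, not the source's): with coverage amounts $d_j$ per unit of product $j$, processing time $t_{ij}$ at bottleneck $i$, capacities $T_i$, and sales ceilings $x_j^{\max}$, the program is the solution of

    \[ \max_x \sum_j d_j x_j \quad \text{subject to} \quad \sum_j t_{ij} x_j \le T_i \;\; \forall i, \qquad 0 \le x_j \le x_j^{\max}. \]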

    The planning of an optimal production program should not be carried out exclusively from a cost point of view; profit-oriented criteria must be taken into account. Full-cost accounting data are not sufficient for planning an optimal production program, since such calculations do not divide costs into variable and fixed ones. In addition to costs, the impact of management decisions on sales revenue and coverage amounts must be considered. In this regard, coverage amount calculations are required.

    The presence of a single bottleneck can be explained by one of two reasons:

    a) if the production process is single-stage, the existing capacity is not enough to produce the maximum possible quantity of all products with positive coverage amounts;

    b) if the production process is multi-stage, the bottleneck occurs in only one area, whose capacity is not enough to produce all the products.

    If the enterprise has a bottleneck, it is necessary to calculate the relative coverage amounts per unit of bottleneck load time for the individual products. With these in hand, the ranked sequence of production must be revised to achieve the optimal production result. Determining the sales and production program without taking the available bottleneck capacity into account results in a lower overall coverage amount; that is the wrong decision, because the company then simply gives away coverage.
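
    A purely illustrative calculation with assumed figures: let product A earn a coverage amount of 40 per unit and occupy the bottleneck for 2 minutes per unit, while product B earns 60 per unit but needs 5 minutes. The relative coverage amounts are 40 / 2 = 20 and 60 / 5 = 12 per bottleneck minute, so A is scheduled first despite its lower absolute coverage amount, and B receives only the bottleneck capacity that remains.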

    Background

    In October 2010, as part of organizing efficiency improvement projects for an aircraft factory for 2011, the Rightstep company performed diagnostics of the factory's main production. The main purpose of the survey was to identify the bottlenecks, i.e. the facilities, management procedures, and divisions that limited the output of the entire plant.
    According to the results of the analysis, the plant's main bottlenecks were identified (a potential bottleneck was also the procedures - or rather, their absence - for maintaining the electronic composition of the product):
    1) assembly shop ASC1;
    2) planning and production management methods;
    3) shop ShTs (stamping) and shop MC (mechanical).
    This article describes the "clearing" of the bottleneck in shop ASC1.

    The ASC1 shop is the first in the sequential chain of machine assembly (there the product begins to be assembled from units; it is then transferred to the other assembly shops, ASC2 and DSP). It is the "apex of the triangle" of the intra-factory supply chain and the consumer of all the "detail-making" shops (DDC); in other words, the beginning of the "pipeline" that moves the product along the assembly chain.

    Consequently, any problem that arises in the ASC1 shop and limits the start of product assembly automatically limits the output of machines by the entire plant.
    And in the fall of 2010 the ASC1 shop was exactly such a bottleneck, with an average output of 6 items per month against a factory plan of 7-8. The main problems of ASC1 were:
    1) non-synchronous supply of parts and assembly units from other shops to ASC1 (read: constant "unexpected" shortages during assembly) due to the actual absence of a calculated order-by-order (machine-by-machine) supply plan;
    2) an extremely inefficient internal organization of work in the shop, with the main symptoms (not causes!) being "no people", "defective parts", "no room, nowhere to put the products".

    In fact, the problems of ASC1 mirrored the problems of management and production organization of the entire plant, above all:
    1) the actual absence of a machine-by-machine nomenclature plan synchronized between the "detail-making" and "assembly" (DDC and ASC) shops, which led to producing the wrong items in the wrong quantities, and as a result to working "by shortage lists" and, ultimately, to disruption of the assembly schedule;
    2) piecework wages, which allow and even force the shops to chase output volume first of all, even in "bottleneck" shops, while not always taking the shortages into account.

    Choice of concept

    Based on the results of the data analysis and a discussion of possible ways to "clear" the bottleneck, the following areas of transformation were identified.

    First: changing the production management system so that it forces production of only what is needed, at a relatively low cost. For this it was necessary to:
    1) organize a system of pull-based, order-by-order nomenclature shop planning, and a system for monitoring deliveries and the "closing" of machines;
    2) through a change in the motivation system (a modification of the piece-rate "deal"), motivate the shops to fulfil, first of all, the specified plan;
    3) provide the ability to manage the production and supply process through visualization and monitoring of what is happening.

    Second: changing the shop's production organization system through:
    1) optimization of the intra-shop flows of parts and assemblies;
    2) elimination of all unnecessary operations, production and non-production alike, on the way to building a machine;
    3) visualization of what is happening, the current status, and present and future problems;
    4) reduction of launch batches and movements throughout the production chain.

    To implement these transformations, the tools of SCM (supply chain management), Lean (lean manufacturing), and TOC (the Theory of Constraints) were chosen from among the production management methods.

    Work in the first direction - setting up the plant's Planning and Monitoring System - was carried out by introducing, for the entire plant, new processes (procedures) of planning and production management synchronized with the machine assembly and shipment schedule, supported by the introduction of the Lean IT system SCMo (Planning and Monitoring System).

    Work in the second direction was carried out using more traditional Lean and TOC tools, "fitted" for use at this plant.

    Transformations. New organization within the shop ASC1

    The transformation project in ASC1 was started in January 2011, but was then suspended for a time due to certain changes in the shop.

    The project results presented below were achieved in just a few months, in part thanks to the decisive and principled position of the shop's management. And, looking ahead, we note that the main goal of the project - increasing the shop's throughput from 6 to 8 machines per month with no increase in operating costs (payroll, number of workers, etc.) or in stocks of parts and WIP - was achieved.

    Optimization of the slipway assembly of products

    Physical placement of products. Working with space

    Based on the results of the analysis, it was determined that one of ASC1's bottlenecks was the physical organization of the slipway (final) assembly area. The area was cluttered with old equipment and mezzanines, unnecessary templates, parts, and other junk that was not actually used in the production of machines of the current modifications.

    Because of this, a maximum of 3-4 simultaneously assembled machines could be placed in the slipway assembly area, and even then in extremely cramped and suboptimal conditions.



    This would have been sufficient given an ideal organization of the assembly work and perfect adherence to the schedule of parts deliveries from the other shops. But in the real world, if problems arose with any one product, it slowed down the assembly of all the other machines on the slipway, and the assembly teams simply had no physical opportunity to switch to another machine.
    As a result, it was decided to dismantle the unnecessary equipment, clear the area, and organize two machine-assembly "lines" on it. In the course of this work, methods of ergonomic workspace organization according to 5S were used. See the diagram and photo.



    As a result, six machines, including delivery machines, can now be placed in the slipway assembly area, and this with an incomparably better and more convenient organization of the workplaces.

    Transfer of operations from the final assembly of the product to other areas

    The analysis of the slipway assembly area, the shop's bottleneck, identified numerous "extra" operations, i.e. operations that could be performed more efficiently in other areas and by less qualified personnel. For some examples, see the photo.

    After a thorough analysis and discussions with the shop's technologists, these operations were transferred to other, less loaded areas, freeing the assemblers' time from "non-core" work.

    Change in the workers' payroll system

    As part of the reforms, the workers' payroll system was changed.
    The wage fund was explicitly calculated from the production plan, and the actual payout depended on the number of machines completed and transferred to the next shop in the chain.
    This amount was then distributed among the members of the assembly teams (by the foremen), depending on the workers' qualifications and their labor participation coefficients.

    Alarm system

    In addition, it was decided to build a flexible workflow structure in the shop, focused on creating the best possible conditions for the production worker and on signaling and solving all his needs and problems online, as described below.

    To respond quickly to the performers' needs, it was decided to use visualization tools, such as signal lights, in the chain described above. Each sector of the area is planned to be equipped with green, red, and yellow lamps and buttons for switching them on.

    A green lamp signals that the sector is fully supplied with parts, the tooling is in place, and the current assembly tasks are fully understood (i.e. the situation is normal).

    A red lamp is a signal that the sector needs a problem solved in one of the three areas; the site foreman should respond to this request as quickly as possible and take measures to resolve it, or notify the other performers if the issue falls within their competence.

    A yellow lamp means the problem exists but is in the process of being solved.

    Optimization of the shop's detail-assembly area

    A supply assurance system for the detail-assembly area of the shop

    After the above transformations, the throughput of the slipway assembly area was increased to 8 machines per month. But almost immediately the bottleneck of the ASC1 shop moved to the shop's detail-assembly sections.

    Therefore, a new organization was implemented at the shop's detail-assembly section, the area that manufactures and directly supplies assembly units for the slipway assembly. The work was completed in about a month, following the methodology proposed by Rightstep:
    1) optimization of the organization of the area's workplaces according to the 5S principles;
    2) installation of a visualization system;
    3) organization of a pull-based system of planning and supplying parts for assembly, using the "supermarket" and "kanban" methods.



    The new organization of production was so well liked by the foremen and workers of the shop's other sections that those sections literally "lined up" in a queue for its implementation.

    Transformations. Ensuring timely deliveries to ASC1


    Planning and Monitoring System SCMo

    From the point of view of "external" conditions, a huge problem for the shop was the non-rhythmic (non-synchronous with the assembly rhythm of specific machines) supply of parts from the plant's DDC shops.
    This problem was addressed within a plant-wide project to set up a system of synchronized, order-by-order nomenclature inter-shop planning. The methodology adopted was "pull" planning (just in time and in exactly the quantity ordered), combined with the "buffer" and "priority" techniques for bottlenecks from the Theory of Constraints.

    The Lean ERP system SCMo was used as the implementation tool, providing on-line planning, management, and monitoring of the production and supply processes.
    The planning algorithm configured for the plant made it possible to generate an order-by-order (for each machine or "bulk" order) nomenclature plan of production and supply for every shop covered by the system, with color coding of each batch of parts from the supplying shop, constantly updated as production proceeds. See the diagram below.

    As part of the transformation project in the ASC1 shop, the following processes were set up "correctly" using SCMo:
    1) formation of the machine assembly sequence for shops ASC1 - ASC2 - DSP and, for ASC1, formation of a delivery schedule for specific machines on specific days of the month (see the screen form below);

    2) on the basis of ASC1's machine delivery schedule, formation of a plan for the supply of parts and assembly units from the supplier shops. It proved impossible to automate this step fully because of inaccuracies in the electronic composition of the product (machine), so it was decided to partially maintain electronic shortage lists in SCMo for the supplier shops, with the suppliers obliged to set a "promised date". In effect, these are the "deficit logs" formerly kept by the shop dispatcher, now published on-line and accessible to everyone; previously the information in them reached the shops and suppliers only at planning meetings, and often in distorted form.

    This was done within the framework of the new management methodology, shifted into the IT system: ensuring maximum visualization of what is happening for all participants in the production chain (see below).

    A positive side effect of maintaining the "electronic deficits" in SCMo is the possibility of switching to "electronic" planning meetings, which are much more efficient than traditional ones and take less time.

    Monitoring what is happening (video surveillance system)

    As part of this direction, to ensure maximum visualization of what is happening in production, a video surveillance system was also introduced in the shop; it works on-line and makes it possible, when necessary, to see what is really going on in the shop at any given moment.


    Project results

    1. The capacity of the shop was increased from 6 to 8 machines per month, with no increase in operating costs (payroll, number of workers, etc.) or in stocks of parts and WIP.
    2. The Planning and Monitoring System for deliveries was put into operation, synchronizing not only the output but also the launch of all the plant's shops with the schedule of unit and final assembly of machines.
    3. Complete transparency of what is happening in production was ensured.
    4. A basis was laid for reaching a production rhythm of 9 machines per month in 2012.
    5. A "flywheel" of transformations was launched, extending to other sections of the shop as well.

    Rightstep, Iris Partenaires

    Bottlenecks

    A bottleneck is a shortage of production capacity in a process chain, caused by some component - equipment, personnel, materials, or transportation - and eliminated in the course of organizational and technical measures: the "clearing" of bottlenecks.

    Bottlenecks can arise in enterprises for a variety of reasons. Given the complex cooperation of the various machines operating in modern enterprises, the nature of intra-production relations and the proportions between individual shops and production sections cannot be fixed once and for all. Improvements in production techniques and technology, improvements in the organization of labor, or a change in the nature of production in one area inevitably require corresponding changes in the areas connected with it.

    Table 46. Bottlenecks (bottleneck / description of the problem / activities and expected result)

    Bottleneck: Workshop layout
    Problem: In the workshop layout, the machines are placed perpendicular to the production line, which does not ensure the safety of the workers standing behind them.
    Activities and expected result: It would be more optimal to arrange the machines in a "herringbone" pattern, at an angle to the line. This will ensure the workers' safety and a more effective use of the workshop's floor space.

    Bottleneck: Transport work
    Problem: Transport in the workshop works as follows: at the beginning of the day, the trucks arrive at the workshop, pick up workpieces from the warehouse, and deliver them to the production lines, then leave. At the end of the day, the trucks start work again: they pick up the finished products from the containers and take them to the appropriate warehouse. The rest of the time the vehicles are idle.
    Activities and expected result: Arrange the pickup and delivery of workpieces and finished products not only at the beginning and end of the working day, but throughout the whole working time.

    Bottleneck: The work of transporters and loaders
    Problem: The transporter and the loader are paid at the full rate, but they are not busy all day.
    Activities and expected result: Pay the loaders and transporters half the rate, since their workload in the workshop is very small, or combine professions so that the transporter can also work as a loader.

    Conclusion

    In this term paper, measures for organizing the production activities of a mechanical assembly shop were developed. In the course of the work, the volume of production was calculated, and the required amount of equipment, the number of personnel, the workshop area, and the wage fund of the main workers, auxiliary workers, managers, employees, and specialists were determined. The solution of the issues of organizing and managing production in the workshop was based on a study of the product designs, the technological processes of their manufacture, and the organization of the work of the enterprise's employees.

    We will now calculate the planned equipment load and determine the bottlenecks, build a production schedule, and analyze the production program for feasibility.

    Identification of bottlenecks in the production program. Calculation and balance of equipment load when planning production.

    Any production manager regularly asks himself: "Will we be able to complete all the planned orders on time? Is the enterprise's production capacity sufficient for this? How intense will the work be in this planning period?"

    This video demonstrates the modules of the TCS system that allow, first, calculating and analyzing the volume indicators of equipment load in the time period of interest and, second, visualizing the production schedule as a Gantt chart while simultaneously displaying the load of the equipment of interest.

    So, as initial data, production orders have already been created in the TCS system: orders for finished products - mounting cabinets in various configurations and quantities - and an order for the manufacture of unified components of own production to maintain stock levels.

    Each of these orders has a planned release date. For orders of marketable products this is usually the contract deadline; for the internal order it is approximately the middle of the month. Recall that we have a certain stock (reserve) of unified components in the warehouse, from which orders will be completed in the first half of the month, while the internal-order items made by the middle of the month will be used to restore the warehouse reserve and complete the remaining orders of the period.

    The next step is to calculate the launch dates for the commodity items and their components, as well as for the components manufactured under the separate order for the unified parts warehouse. Select all the production specifications of the planning period and run the "Launch/Release Date Calculation" macro.

    As a result, for all the manufactured parts and assembly units we get approximate start and end dates of production, calculated from the given deadlines and the applicable technological processes.

    Make these production specifications working, and the corresponding tab will show the nomenclature production plan: a list of the items with the quantities to be produced and the deadlines.

    So, the earliest release date of a batch is February 18, the latest is March 23, 2010.

    The "Technological process" tab provides more detailed information, namely the plan of operations, i.e. a list of all the work that must be done to produce all the planned items. For each job, its labor intensity is calculated, and the equipment, workshop, section, profession, and grade are displayed in accordance with the technological process.

    The TCS system also maintains information about the enterprise's machine park, i.e. the actual quantity of each equipment model and its availability in the departments. For example, we have an Amada press and a FINN-POWER press in the first section of the seventh workshop, welding equipment in the second section, and assembly and inspection tables in the third.

    To assess the feasibility of this plan, we use the equipment load calculation module of the TCS system. Set the start and end dates of the period in which the planned work is to be carried out, namely February 18 and March 23, 2010, and run the calculation.

    The calculation produces a list of all the equipment models used to perform the work, indicating which groups each model belongs to and where it is located. For each model, the working time fund in hours is calculated for the given period, taking into account the number of machines of this model in the department and the schedule of their planned repairs and maintenance. It is also calculated how many hours in total this equipment will be busy performing the planned operations. The last column shows the planned load.
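
    Schematically, the planned load of an equipment model $i$ is (my notation, not the system's):

    \[ \text{load}_i = \frac{T_i^{\text{planned}}}{T_i^{\text{fund}}} \times 100\%, \qquad T_i^{\text{fund}} = n_i \cdot T_{\text{period}} - T_i^{\text{maintenance}}, \]

    where $n_i$ is the number of machines of this model in the department, $T_{\text{period}}$ is the working time in the chosen interval, and $T_i^{\text{planned}}$ is the total duration of the scheduled operations assigned to this model.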

    In practice, depending on the size and structure of the enterprise, this list can be very long (many workshops, sections, and models), and working with such a volume of information can be genuinely difficult. For convenience, various settings can therefore be used.

    For example, we can show the load of only one department of interest: say, the first section of the twelfth workshop, or the second section of the seventh workshop. We can also show only the equipment groups of interest, for example Control; equipment of this group is present in different divisions of the enterprise.

    To quickly identify the potential bottlenecks of our production plan, it is enough to enter a load threshold. Let's enter 70%, assuming that equipment whose load exceeds 70-80% in the planning period constitutes the so-called risk group, and hide the rows with lower load. In our example, only the FINN-POWER hydraulic turret punch press is loaded above 70%; for the March plan, it is the very bottleneck.

    An accidental failure of this equipment can disrupt the execution of many orders of the planning period, if not of the entire plan, which usually leads not only to financial penalties but also to non-financial losses: such a negative event can, for example, damage the enterprise's business reputation.

    Let us also see which equipment deserves special attention. We enter a threshold value of 50% and simply color such rows in a chosen color. The Amada press brake joins the FINN-POWER, with an estimated load of 57%. All the other workshops and their equipment are not so heavily loaded and will most likely not require increased attention from the planner.

    Thus, using the equipment load calculation module, we can draw the following conclusions:

    1. Whether our plan is feasible in principle. The criterion for this assessment is whether the load of any model exceeds 100%. If somewhere the load is above 100%, no modern methods of production schedule optimization will help; in that case the equipment operating fund must be increased, i.e. either the time period must be lengthened, or additional staff hired to work a second shift, or a second piece of equipment run alongside. The plan in our example has no position where the load exceeds 100%. This means that, at least theoretically, the given amount of work can be completed on time with the existing equipment.

    2. Whether the plan is feasible in the realities of our particular production. This assessment also allows a conclusion about the feasibility of the plan, but not a theoretical one like the first: it is closer to life and to the individual characteristics of each production. For example, it is obvious that an equipment load of 99% will allow the plan to be fulfilled only if operation proceeds without failures, delays, and downtime, when all systems are duplicated and robots work at the enterprise. In reality, failures and delays happen regularly for various reasons: the material was not delivered on time, the machine was not set up, a worker fell ill, there was a power failure, and so on. Therefore, at every enterprise - and even for different workshops and sections of the same enterprise, or for different kinds of work - this criterion has a different value: for one section a load of 80% is considered critical, for another 60%. That is, each type of work or area can be compared against its own individual threshold value, which experienced planners usually know from practice.

    3. Whether the structure of the enterprise's existing machine park corresponds to the production program. This conclusion is especially useful for enterprises with a stable production program, i.e. whose production plan can be built in advance and does not change much from month to month. In our example, most equipment models are loaded at less than 40%, while the load of the FINN-POWER press reaches a critical value. If this were the case in serial production, then to increase the output volume we should first of all buy additional blanking equipment.



    