

While keeping a close eye on your favourite books, the system also allows creation of a complete library catalogue system with the help of a MySQL database. Users of the library can log into the system with a barcode scanner, and take out or return books recorded in the database guided by an LCD screen attached to the Pi. We love books and libraries. Did I say we love books?

In fact we love them so much that members of our team have even written a few. Fancy adding some Pi to your home library? Check out these publications from the Raspberry Pi staff.

All Systems Go! is a conference focused on the low-level user-space technologies at the foundation of modern Linux systems. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

Both full presentation slots and shorter lightning talk slots are available. We are now accepting submissions for presentation proposals; in particular, we are looking for sessions covering, but not limited to, a range of low-level user-space topics. While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space. Please submit your proposals by September 3rd.

Notification of acceptance will be sent out a few weeks later. To submit your proposal now, please visit our CFP submission web site; for further information, see the All Systems Go! web site. The conference is the successor to systemd.conf; thus, anything you think was appropriate for submission to systemd.conf is also fitting here.

In the past months I have been working on a new project: casync. It combines the idea of the rsync algorithm with the idea of git-style content-addressable file systems, and creates a new system for efficiently storing and delivering file system images, optimized for high-frequency update cycles over the Internet.

Its current focus is on delivering IoT, container, VM, application, portable service or OS images, but I hope to extend it later in a generic fashion to become useful for backups and home directory synchronization as well (but more about that later). Of course, other systems already exist in this space. To briefly name a few: Docker has a layered tarball approach; OSTree serves the individual files directly via HTTP and maintains packed deltas to speed up updates; while other systems operate on the block layer and place raw squashfs images (or other archival file systems, such as ISO9660) for download on HTTP shares, in the better cases combined with zsync data.

None of these approaches appeared fully convincing to me when used in high-frequency update cycle systems. In such systems, it is important to optimize towards a couple of goals: minimal download sizes, minimal disk usage on client and server, and low server-side requirements. With OSTree, for example, revision control (a tool for the developer) is intermingled with update management (a concept for optimizing production delivery). To counter that, OSTree supports placing pre-calculated delta images between selected revisions on the delivery servers, which means a certain amount of revision management leaks into the clients.

Delivering squashfs or other file system images directly is almost beautifully simple, but of course it means every update requires a full download of the newest image, which is bad for both disk usage and generated traffic. Combining such images with zsync data helps with the traffic; on the other hand, the server requirements in disk space and in functionality (HTTP Range requests) are then minus points for the use case I am interested in.

The only point I am trying to make is that for the use case I care about (file system image delivery with high-frequency update cycles) each system comes with certain drawbacks. Specifically, the tarball format is famously nondeterministic: the same file tree can result in any number of different valid serializations, depending on the tool used, its version, and the underlying file system. Some tar implementations attempt to correct that by guaranteeing that each file tree maps to exactly one valid serialization, but such a property is always only specific to the tool used.

So much for the background on why I created casync. The chunking algorithm is supposed to create variable, but similarly sized, chunks from the data stream, and to do so in a way that the same data results in the same chunks even if placed at varying offsets. For more information see this blog story. As an extra twist, we introduce a well-defined, reproducible, random-access serialization format for file trees (think: a modern, deterministic replacement for tar), so that directory trees can be chunked just like raw block data. Do the same with the chunk store, and share it between the various index files you intend to deliver.
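
To make the idea concrete, here is a minimal sketch of content-defined chunking in Python. This is not casync's implementation: the gear-style rolling hash and the size bounds are illustrative stand-ins for casync's buzhash-based cutter, but the principle (cut wherever the rolling hash of the most recent bytes matches a pattern) is the same:

```python
import random

random.seed(0)  # fixed table so chunk boundaries are reproducible across runs
GEAR = [random.getrandbits(64) for _ in range(256)]

MIN_SIZE, AVG_SIZE, MAX_SIZE = 16 * 1024, 64 * 1024, 256 * 1024
MASK = AVG_SIZE - 1  # cut when the low bits are zero, roughly every AVG_SIZE bytes

def chunks(data: bytes):
    """Yield content-defined chunks: a boundary depends only on the last few
    dozen bytes seen, so identical data produces identical chunks even when
    shifted to a different offset in the stream."""
    start, h = 0, 0
    for i, b in enumerate(data):
        # gear rolling hash; old bytes shift out of the 64-bit state, giving
        # an implicit sliding window (casync itself uses buzhash instead)
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        size = i + 1 - start
        if (size >= MIN_SIZE and (h & MASK) == 0) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]
```

Because a boundary decision depends only on nearby bytes, inserting data early in a stream disturbs at most a chunk or two; the cutter quickly resynchronizes and all later chunks come out identical.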

Why bother with all of this? Streams with similar contents will result in mostly the same chunk files in the chunk store.

This means it is very efficient to store many related versions of a data stream in the same chunk store, thus minimizing disk usage. Moreover, when transferring linear data streams, chunks already known on the receiving side can be reused, thus minimizing network traffic.

Why is this different from rsync or OSTree, or similar tools? Well, one major difference between casync and those tools is that we remove file boundaries before chunking things up. This means that small files are lumped together with their siblings and large files are chopped into pieces, which permits us to recognize similarities in files and directories beyond file boundaries, and makes sure our chunk sizes are pretty evenly distributed, without the file boundaries affecting them.
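
A sketch of that idea, building on the chunker above; the record format here is ad hoc (casync's real serialization is far richer, covering the meta-data discussed below), but it shows how a whole tree becomes one deterministic stream in which file boundaries are invisible to the chunker:

```python
import os

def serialize_tree(path: str) -> bytes:
    """Flatten a directory tree into a single byte stream (toy format, not
    casync's): entries are visited in sorted order, so the same tree always
    produces the same stream."""
    out = bytearray()
    for root, dirs, files in os.walk(path):
        dirs.sort()                      # deterministic traversal order
        for name in sorted(files):
            full = os.path.join(root, name)
            rel = os.path.relpath(full, path).encode()
            with open(full, "rb") as f:
                data = f.read()
            out += len(rel).to_bytes(8, "little") + rel
            out += len(data).to_bytes(8, "little") + data
    return bytes(out)

# small files get lumped together, large files get split up:
# tree_chunks = list(chunks(serialize_tree("/some/directory")))
```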

SHA256 is used as a strong hash function to generate digests of the chunks. The diagram shows the encoding process from top to bottom: it starts with a block device or a file tree, which is serialized and chunked up into variable-sized chunks; the chunks are then individually compressed. The compressed chunks are placed in the chunk store, while a chunk index file is written listing the chunk hashes in order. The original SVG of this graphic may be found here.
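
Continuing the sketch, the encoding pipeline then looks roughly like this; the xz compression and the two-level directory layout are assumptions for illustration, not casync's actual on-disk format:

```python
import hashlib
import lzma
import os

def store_chunk(store: str, chunk: bytes) -> str:
    """Compress a chunk and file it under its SHA256 digest. A chunk shared
    by many images is stored exactly once, which is where all deduplication
    comes from."""
    digest = hashlib.sha256(chunk).hexdigest()
    path = os.path.join(store, digest[:4], digest)
    if not os.path.exists(path):         # already present: nothing to do
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(lzma.compress(chunk))
    return digest

def make_index(store: str, stream: bytes) -> list[str]:
    """Chunk a serialized stream (using chunks() from above) and return the
    ordered digest list, the moral equivalent of a .caidx/.caibx file."""
    return [store_chunk(store, c) for c in chunks(stream)]
```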

Note that casync operates on two different layers, depending on the use case. You may use it on the block layer: in this case the raw block data on disk is taken as-is, read directly from the block device, split into chunks as described above, compressed, stored and delivered. Or you may use it on the file system layer.

In this case, the file tree serialization format mentioned above comes into play: the tree is serialized in a reproducible way, and the resulting stream is chunked, compressed, stored and delivered just the same. The fact that casync may be used on both the block and the file system layer opens it up for a variety of use cases: in the VM and IoT ecosystems, shipping images as block-level serializations is more common, while in the container and application world, file-system-level serializations are more typically used.

Chunk index files referring to block-layer serializations carry the .caibx suffix, while those referring to file-system-layer serializations carry the .caidx suffix. Note that you may also use casync as a direct tar replacement, i.e. to serialize a file tree without any chunking; such files carry the .catar suffix. Finally, chunk stores are directories carrying the .castr suffix.

When reconstructing data, casync can make use of local seeds: existing files, directories or block devices whose chunks are reused instead of downloaded. This of course is useful whenever updating an image: the version already installed acts as seed, so that only the chunks that actually changed need to be fetched. Note that using seeds requires no history relationship between the seed and the new image to download; any local data with matching chunks will do, and this has major benefits over schemes based on pre-computed deltas.
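
A sketch of that logic, under the same toy format as above (the URL layout and the use of lzma are assumptions): reconstruction walks the index and takes each chunk from a local seed when possible, downloading only the rest:

```python
import hashlib
import lzma
import urllib.request

def index_seed(data: bytes) -> dict[str, bytes]:
    """Chunk an existing local serialization so its chunks can be offered
    as seeds, keyed by digest."""
    return {hashlib.sha256(c).hexdigest(): c for c in chunks(data)}

def assemble(index: list[str], store_url: str, seed: dict[str, bytes]) -> bytes:
    """Rebuild a stream from its chunk index. No history relationship with
    the seed is needed; any chunk with a matching digest is reused."""
    out = bytearray()
    for digest in index:
        if digest in seed:
            out += seed[digest]                      # local, free
        else:                                        # remote, downloaded
            url = f"{store_url}/{digest[:4]}/{digest}"
            with urllib.request.urlopen(url) as resp:
                out += lzma.decompress(resp.read())
    return bytes(out)
```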

When operating on the file system level, the user has fine-grained control over the meta-data included in the serialization. Different use cases call for different meta-data: when doing personal backups, for example, file ownership matters little, but file modification times are important. Moreover, different backing file systems support different feature sets, and storing more information than necessary might make it impossible to validate a tree against an image if the meta-data cannot be replayed in full.

The precise set of selected meta-data features is also always part of the serialization, so that seeding can work correctly and automatically. The range of supported features is wide: besides the usual baseline of file meta-data (file ownership and access bits) and more advanced features (extended attributes, ACLs, file capabilities), a number of more exotic items can be stored as well, including Linux chattr(1) file attributes, as well as FAT file attributes (you may wonder why the latter? EFI system partitions are FAT, after all).

In the future I intend to extend this further, for example storing btrfs sub-volume information where available.

The chunk size is a trade-off: smaller chunks increase the number of generated files in the chunk store and increase HTTP GET load on the server, but also ensure that sharing between similar images is improved, as identical patterns in the stored images are more likely to be recognized.

By default casync will use a 64K average chunk size. Tweaking this can be particularly useful when adapting the system to specific CDNs, or when delivering compressed disk images such as squashfs (see below). Emphasis is placed on making all invocations reproducible, well-defined and strictly deterministic; as mentioned above, this is a requirement for the intended security guarantees, but is also useful for many other use cases. Moreover, the casync mtree command may be used to generate a BSD mtree(5)-compatible manifest of a directory tree.

The file system serialization format is nicely composable. By this I mean that the serialization of a file tree is the concatenation of the serializations of all files and file sub-trees located at the top of the tree, with zero meta-data references from any of these serializations into the others.

This property is essential to ensure maximum reuse of chunks when similar trees are serialized. When extracting file trees or disk image files, casync will automatically create reflinks from any specified seeds if the underlying file system supports it (such as btrfs, ocfs2, and, in the future, xfs): instead of copying the desired data from the seed, we can just tell the file system to link up the relevant blocks. This works both when extracting file trees and when writing disk images.
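
The underlying primitive can be sketched like this; casync's actual code clones individual block ranges out of seed files, but a whole-file clone via the Linux FICLONE ioctl shows the mechanism:

```python
import fcntl
import os

FICLONE = 0x40049409  # from <linux/fs.h>; clones src's blocks into dst

def reflink(src: str, dst: str) -> None:
    """Make dst share src's on-disk blocks (copy-on-write). Raises OSError
    on file systems without reflink support, in which case a plain copy is
    the natural fallback."""
    src_fd = os.open(src, os.O_RDONLY)
    dst_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        fcntl.ioctl(dst_fd, FICLONE, src_fd)
    finally:
        os.close(src_fd)
        os.close(dst_fd)
```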

Alternatively, casync can create traditional UNIX hard links to identical files in any specified seeds; this works on all UNIX file systems and can save substantial amounts of disk space. In this mode, casync exposes OSTree-like behavior, which is built heavily around read-only hard-link trees. Implicitly, file systems such as procfs and sysfs are excluded from serialization, as they expose API objects, not real files.

This is particularly useful when transferring container images for use with Linux user namespacing. In addition to local operation, casync currently supports HTTP, HTTPS, FTP and ssh natively for downloading chunk index files and chunks (the ssh mode requires installing casync on the remote host, though; an sftp mode not requiring that should be easy to add).

When creating index files or chunks, only ssh is supported as remote back-end. When operating on block-layer images, you may expose locally or remotely stored images as local block devices. Chunks are downloaded on access with high priority, and at low priority when idle in the background.

Similarly, when operating on file-system-layer images, you may mount locally or remotely stored images as regular file systems. Note that special care is taken that the images exposed this way can be packed up again with casync make, and are guaranteed to return the bit-by-bit identical serialization they were mounted from.

The most basic operation is creating an image: a command such as casync make foobar.caidx /some/directory will create a chunk index file foobar.caidx (along with a chunk store) for the specified directory. This command operates on the file-system level. A similar command operating on the block level: casync make foobar.caibx /dev/sda1.

This command creates a chunk index file foobar.caibx for the block device. Note that you may as well read a raw disk image from a file instead of a block device. To reconstruct the original file tree from the index, use casync extract foobar.caidx /some/directory. The above are the most basic commands, operating on local data only. The same works with remote data: casync extract http://example.com/images/foobar.caidx /some/directory extracts the specified index onto a local directory, downloading chunks as needed. This of course assumes that foobar.caidx, along with its chunk store, was uploaded to the HTTP server in the first place. You can use any command you like to accomplish that, for example scp or rsync. Alternatively, you can let casync do this directly when generating the chunk index:

This will use ssh to connect to the specified remote host and store the index and chunks there. You may also specify the chunk store location explicitly; if you do not, then the store path is automatically derived from the path or URL of the index file. Of course, when extracting from a remote source, the chunks are downloaded from there as well.

When creating chunk indexes on the file system layer, casync will by default store meta-data as accurately as possible, but the feature set may be restricted: a command carrying --with=sec-time --with=symlinks --with=read-only will create a chunk index for a file tree serialization that has three features above the absolute baseline supported: one-second-granularity time-stamps, symbolic links, and a per-file read-only bit. In this mode, none of the other meta-data bits are stored, including nanosecond time-stamps, full UNIX permission bits, file ownership, or even ACLs or extended attributes.

As mentioned, casync is big on reproducibility. The casync digest command computes a cryptographic digest of a file tree or image; this digest will include all meta-data bits casync and the underlying file system know about. Usually, to make this useful you want to configure exactly what meta-data to include.
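
The principle can be sketched as follows; this is a toy stand-in for casync digest with just two feature flags, but it shows why the digest is deterministic and why the selected feature set must itself feed into the result:

```python
import hashlib
import os
import stat

def tree_digest(path: str, with_owner: bool = False,
                with_mtime: bool = False) -> str:
    """Deterministic digest over a file tree and a chosen meta-data set.
    The flags themselves are hashed first, so two trees only compare equal
    when both their content and the selected feature set match."""
    h = hashlib.sha256()
    h.update(bytes([with_owner, with_mtime]))
    for root, dirs, files in os.walk(path):
        dirs.sort()                          # fixed order, fixed digest
        for name in sorted(files):
            full = os.path.join(root, name)
            st = os.lstat(full)
            h.update(os.path.relpath(full, path).encode() + b"\0")
            h.update(stat.S_IMODE(st.st_mode).to_bytes(2, "little"))
            if with_owner:
                h.update(st.st_uid.to_bytes(4, "little"))
                h.update(st.st_gid.to_bytes(4, "little"))
            if with_mtime:
                h.update(int(st.st_mtime).to_bytes(8, "little"))
            if stat.S_ISREG(st.st_mode):
                with open(full, "rb") as f:
                    h.update(f.read())
    return h.hexdigest()
```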

A shortcut such as --with=unix expands to the corresponding list of individual feature flags; this generates a digest with the most accurate classic UNIX meta-data, but leaves one feature out. Two further commands, casync list and casync mtree, help inspect an image. The former command will generate a brief list of files and directories, not too different from tar t or ls -al in its output.

The latter command will generate a BSD mtree(5)-compatible manifest. Note that casync actually stores substantially more file meta-data than mtree files can express, though.

Also, casync is not trying to minimize network traffic at all costs; instead, the tool is supposed to find a good middle ground that is good on traffic and disk space, but not at the price of convenience or of requiring explicit revision control. Nor does casync replace rsync, OSTree or similar tools: they have very different use cases and semantics. For example, rsync permits you to directly synchronize two file trees remotely, something casync does not attempt.

To make the tool useful for backups, encryption is still missing; I have pretty concrete plans for how to add that. When implemented, the tool might become an alternative to restic, BorgBackup or tarsnap. Right now, if you want to deploy casync in real life, you still need to validate the downloaded chunk index file yourself, for example with a detached signature.

It is my intention to integrate with gpg in a minimal way so that signing and verifying chunk index files is done automatically. In the future it will also propagate progress data this way, and more. I also intend to add a new seeding back-end that sources chunks from the local network: after downloading the new chunk index, casync would ask nearby systems for the chunks it needs before falling back to the configured store. This should speed things up on all installations that have multiple similar systems deployed in the same network.

Further plans are listed tersely in the TODO file. Is this a systemd project? casync is hosted under the systemd GitHub umbrella; however, the code-bases are distinct and without interdependencies, and casync works fine both on systemd systems and on systems without it.

Is casync portable to other OSes? It is a Linux-focused project; specifically, this means that I am not too enthusiastic about merging portability patches for OSes lacking the openat(2) family of APIs. Does casync require reflink-capable file systems, such as btrfs, to work? No: reflinks are merely used as an optimization where available, with regular copies as the fallback. Is it ready for production? While I have been working on it for quite some time and it is quite featureful, this is the first time I have advertised it publicly, and it has hence received very little testing outside of its own test suite. I am also not fully ready to commit to the stability of the current serialization or chunk index format.

I also intend to correct that soon. Why are you reinventing the wheel again? I am pretty sure I did my homework, and that there is no tool just like casync right now.

The tools coming closest are probably rsync, zsync, tarsnap and restic, but each is quite a different beast. Why did you invent your own serialization format for file trees? The serialization casync implements places a focus on reproducibility, random access and meta-data control; much like traditional tar, it can still be generated and extracted in a stream fashion, though. What about delivering squashfs images? How well does chunking work on compressed serializations? squashfs compresses its contents in blocks of limited size, so a single bit change stays confined to the block it falls into. This fact is beneficial for systems employing chunking, such as casync, as it means single-bit changes might affect their vicinity but will not explode in an unbounded fashion.
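
A quick illustration of that locality, with lzma and an arbitrary block size standing in for whatever block-wise compressor the image format uses:

```python
import lzma

def compress_blocks(data: bytes, block: int = 64 * 1024) -> list[bytes]:
    """Compress in independent fixed-size blocks, so a change only rewrites
    the compressed block it lands in."""
    return [lzma.compress(data[i:i + block])
            for i in range(0, len(data), block)]

a = bytes(1024 * 1024)            # 1 MiB of zeroes
b = bytearray(a)
b[500_000] ^= 1                   # flip a single bit
diff = sum(x != y for x, y in
           zip(compress_blocks(a), compress_blocks(bytes(b))))
print(f"{diff} of {len(compress_blocks(a))} compressed blocks differ")
# -> 1 of 16: the damage stays local, so chunking still deduplicates well
```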

How precisely to choose both values (the compression block size and the chunk size) is left as a research subject for the user, for now. What does the name casync mean? It is a content-addressable sync tool: it makes use of the content-addressable concept of git, hence the ca- prefix, and it synchronizes data, hence the -sync suffix. Where can I get this stuff? Is it already packaged? I just tagged the first version. Martin Pitt has packaged casync for Ubuntu.

There is also an ArchLinux package. If you are involved with projects that need to deliver IoT, VM, container, application or OS images, then maybe this is a great tool for you, though other options exist, some of which are linked above. Note that casync is an Open Source project: contributions are welcome. I also intend to talk about it at All Systems Go!


Amazon WorkSpaces allows you to access a virtual desktop in the cloud from the web and from a wide variety of desktop and mobile devices. In these environments, organizations sometimes need the ability to manage the devices which can access WorkSpaces.

For example, they may have to regulate access based on the client device operating system, version, or patch level in order to help meet compliance or security policy requirements. You can implement policies to control which device types you want to allow and which ones you want to block, with control all the way down to the patch level.

To carry out the project, SK pumping units were installed at the wells. Cameron delivered sucker rods with industry-best tensile strength properties, which were coupled to wear-resistant pumps featuring top attachment, chrome-plated cylinders, tungsten carbide saddles and titanium carbide balls.

To reduce the amount of gas in the pump, the SRP unit was installed below the perforation interval (installation depth averaged 2, m).

As opposed to the pilot project, the scale-up relies on D-super sucker rods of Russian make, while the pumps will be supplied by Weatherford and Cameron. In addition, the manufacturing quality of various pump components (plunger stems, plungers, cylinders, and valve pairs) has been studied. In view of these findings, work has commenced on drafting new corporate regulations regarding SRP units (Table 4). This new approach to pump quality, combined with the above research, has resulted in a decision to no longer acquire domestically produced SRP units, and since then the Company has been procuring only more efficient imported pumps.

Meanwhile, Russian manufacturers capable of producing competitive quality articles have not been fully excluded from the search for the most efficient solutions as regards SRP operations. In addition, pumps of PKNM make featuring a nitrogen-hardened cylinder are slated to undergo field trials in Orenburg.

Improving well drilling performance in discontinuity zones and in productive reservoirs with poor porosity and permeability is a key objective of Verkhnechonskneftegas (VCNG). The technology of multi-stage hydraulic fracturing, designed to stimulate oil inflow in areas featuring inadequate poroperm properties, has come to be one of the most efficient solutions to this problem.

The technology has been significantly improved by using disintegrating frac balls to isolate target intervals. The optimum solution to achieve vertical continuity of such zones is horizontal hole fracturing. In such a case, a proppant-filled vertical fracture creates sustainable hydraulic continuity across various bands.

Pilot Project

During the search for a solution to problems arising from excessive reservoir discontinuity, VCNG began testing multi-stage frac technology.

The borehole needed no extra preparation such as reaming. The FracPoint assembly included frac sleeves to connect the casing string space to the reservoir following a frac, plus annular swellable packers to isolate frac intervals (Fig.). The first-stage frac was carried out using a screen liner, in order to make the completion assembly more affordable while avoiding process risks. The second stage occurred at the frac sleeve location (Fig.).

VCNG tried out several types of such seal assemblies. In particular, a G locator tubing seal stinger was used, fitted to a polished bore receptacle (PBR) that comprises part of the liner assembly. Both assemblies displayed a failure-free record, usability, and interchangeability. Following the multi-stage frac, the well was completed in underbalanced mode using a coil tubing kit.

With the tubing out of the hole, the liquid inflow displaced the ball blocking the bottom part of the screen liner to provide access to the bottomhole zone. The multi-stage fracs carried out in the well yielded substantial incremental oil rates. These results, in turn, led to a decision to continue the trials: eight candidate wells were identified, with four slated for two-stage fracs and four others selected for three-stage fracs.

The following parameters were achieved in the multi-stage frac technique trials, compared to single-stage fracs in newly drilled wells (Fig.). This technology was successfully piloted in two wells.

It calls for injecting 0. As the ball disintegrates, no lengthy milling work needs to be done, while access to the bottom hole is cleared in an efficient way. The technology trials were found to be successful. However, prior to acid injection, one of the two wells was flushed with a potassium chloride solution in underbalanced mode using nitrogen. In this flushing operation, the ball was exposed to dynamic loads from heated brine flows and proppant particles moving in an ascending flow.

Following the flush job, coil tubing was run to the sleeve installation depth to check the borehole. Hydrate plugs also impeded the trials. In particular, as coil tubing was run to the bottom of the well, the coil tubing string got stuck against a hydrate plug; it took nine days to pry it loose. In addition, remediation work carried out by a workover crew failed to dislodge the seal assemblies linking the tubing string and the liner top in the hole.

Once the hole was perforated and the seal assembly was lifted out of the hole, the conclusion was drawn that a hydrate plug had formed between the tubing conductor attached below the seal assembly and the liner.

Positive Experience

Overall, multi-stage fracture treatment using disintegrating frac balls was found to be successful following the above tests, and was recommended for further use at the VC field.

Its application has greatly shortened the cycle of putting a well on stream, as the scope and complexity of post-frac completion work have been reduced. In addition, since frac balls no longer have to be milled out, heavy contamination of the highly conductive fracture with flushing fluid is avoided. Multi-stage fracture pilots are set to continue. In addition to the tried and tested technology, such pilots will lead the way to other frac options that also yield significant economic benefits.

Work is under way in several related areas. TNK-BP specialists obtained useful experience in production string running with rotation and circulation at an onshore rig operating in the Verkhnechonskoye oil and gas condensate field. Ever since production well drilling got under way at the field, the above fact complicated production string installations in the pay zone for the purpose of borehole consolidation.

Most often, a recurring set of problems would arise; however, they could not be fully resolved. To cement the casing string, the TorkDrive tool is fitted with a pup joint, sub, TIW check valve, rotating cementing head, pup joint, and a sub that attaches to the high-torque casing collar (Fig.).

No personnel are required on the rig floor to run the casing; this improves safety in the context of hazardous operations and rules out injuries to company or contractor personnel. In the fourth quarter, the Weatherford OverDrive system was applied to run casing strings into several Verkhnechonskoye wells. In fact, prior to that, casing string rotation had not been used in those wells for cementing purposes.

This technology reduced the well construction cycle by two days, by obviating the need for open-hole reaming prior to running the production string and by ruling out potential issues as the string passed the argillite zone.

The pilot wells had a maximum borehole depth of 2, m with a step-out of up to 2, m.

Concerted efforts to implement a corporate information search system have been under way at TNK-BP for some time. This system is designed to facilitate daily operations involving large volumes of data for Exploration Division employees.

By spring the system had already been improved through automated indexation and classification. Several candidate systems were evaluated; the main advantages of the one selected include the ability to return search results depending on the context, to search in the different languages used throughout the organization, and broad configuration options. Without such a tool, important information frequently remains undiscovered and ends up being displaced by obsolete data or by information compiled from questionable or even unreliable sources.

Needless to say, decisions taken on the basis of unreliable data leave much to be desired. Corporate search systems help resolve the difficulties associated with navigating this information flow.

This class of information systems is designed to search diverse information sources: network resources, portals, websites, databases, information systems (ERP, CRM), email messages, etc. There are several critical differences between a corporate search engine and the consumer search systems available on the Internet. Such an engine may be a program, part of a program, a software bundle or a library, depending on the objectives and implementation.

In addition, FAST is a platform geared towards search-driven applications. It served as the framework for developing an information search and classification system for arrays of geological and geophysical data (Fig.). This search server allows users to adapt it to their own needs.

Effect and Prospects

It is expected that, once deployed, the corporate information search system will have a significant impact on the whole business, in particular by slashing the labor costs (by 50 to 60 percent) associated with searching for geological and geophysical data stored on various network resources.

Thus, the decision-making process will be based on the most complete information available, while the risk of using obsolete or low-quality data will be conspicuously diminished. Another important factor is that the FAST platform, deployed at the Exploration Division, may be expanded to support data and application integration scenarios (the search system may be configured to search for any types of data) for any other Company units.

The platform can also be used in other corporate information systems rolled out at the Company. For example, it can be connected to the Navigator intranet portal, which would considerably enhance search relevance.

The four-year experience of introducing the Directum enterprise content management (ECM) system across TNK-BP proves that streamlined documentation flow is a key component in the successful implementation of its capital projects.

Against the backdrop of this success, the Company intends to expand the footprint of this software across a number of target subsidiaries (TS). The need for a refined system of data exchange is particularly urgent in the context of major long-term projects where documents are issued by the tens of thousands, if not hundreds of thousands.

The benefits of using high-performance enterprise content management systems are obvious, as properly structured data-sharing makes for superb results in project management and contributes to cost cutting and timely completion of project phases.

Access to any item, such as a folder or document, is governed by the rights vested in a given user, which are assigned on a document-by-document basis by the relevant specialist of the Documentation Control Section. Depending on his or her access rights, the user may perform certain actions involving the document.

The system enables documents to be linked in order to quickly access documents related to a single topic going forward. The system also allows users to view the history of each document by logging all events affecting the document and showing who did what and when.

Document endorsement is a key process in controlling design and estimate documentation. In the Directum system, this process is executed as a collective review, whereby the project team examines the document for the purpose of endorsement, comments, etc.

Over time, its footprint has expanded: over the last two years the system was rolled out to benefit the major projects of Orenburgneft (currently, an extra copy of the project documentation control system resides on a test server), followed by a system rollout for the target capital projects of TNK-Nyagan. Previously, document flow at such units was controlled by disparate non-specialized software or manually; as a result, documents were often lost, while numerous documents of uncertain purpose and provenance kept piling up.

Implementation of the Directum system made it possible to set in place an organizational process management mechanism in support of project document development, endorsement, approval, and storage; this is hugely important considering the constantly ballooning volume of documentation that gets produced.

Another drawback involves the lack of proper control over compliance discipline and the inability to properly track document locations. Adverse consequences may also assume the form of financial losses and reputational damage.

This situation is further complicated by the fact that business process participants are often remotely located and lack a common information space. As part of implementing the major project management module, special document templates have been created and standard routes set up. The module has not only established a common order for documentation storage, but has also provided a mechanism for managing the processes involved in the development, endorsement, approval, and storage of project documents, which is all the more crucial considering the current volume of documentation generated.

Reviewers receive an e-mail notification of the need to consider a document package received from the engineering service provider. They then log on to the system using the enclosed link, examine the documents, and record their comments on the collective review sheet.

The responsible specialist signs a cover sheet and sends it to the engineering service, confirming document receipt, and then uploads the documents and the cover sheet to Directum for the Design Engineering and Technical Review Team. Directum features a fairly extensive feature set. For example, E-room resources have been allocated to enable information sharing with consultants and contractors. Documentation received from contractors and subject to checkout is entered into a structured database; each document is assigned a unique number so that any document can easily be located, not by project title alone.

Plus, there is a feature whereby more than one version of a document can be created, to track all the changes or revisions the document may have undergone. In that context, the system clearly specifies which version of the document is up to date at any given point in time. In case any contractor-furnished document has to be checked out by the customer, collective review sheets are provided and communicated to the examiners in the form of tasks, along with the documents to be checked.

Directum creates a common information space that rules out the possibility that an outdated version of the document could be checked; within the system, specific users may be named to check documentation or provide input; furthermore, this arrangement regulates the documentation review timeline and provides for revision management.

The enterprise content management system in question offers numerous advantages, as confirmed by specialists of the TNK-BP units that have already installed Directum. According to Alexander Myasoedov, the use of the major project management module highlights the following advantages of Directum: whatever Company office the user may be at, any document can be retrieved or opened via an e-mail link, and the required version of the document is accessible for work.

Overall, Directum is a powerful project management tool. Applying modern approaches to company management requires maximum realization of staff potential, since nowadays staff is the key factor behind the efficiency with which all other company resources are used.

Managers have an interest in the professional growth of their staff at every level, including the development of a succession pool — another important focus area.
