Method implemented by an information processing system, and information processing system
Patent abstract:
PORTABILITY OF VIRTUAL MACHINE IMAGES BETWEEN PLATFORMS. In one embodiment, an approach is provided that differentiates a source topology model associated with a source platform and a target topology model associated with a target platform. This differentiation is performed by a processor and results in a topology difference. An operation in a workflow model is obtained from a resource library, the operation being associated with the topology difference. At least part of the resource library is stored on a persistent storage medium. The operation is passed on for implementation in order to implement a part of a solution. The implemented part of the solution includes a target image compatible with the target platform. Publication number: BR112012018768B1 Application number: R112012018768-6 Filing date: 2010-12-14 Publication date: 2020-12-22 Inventors: Indrajit Poddar; Igor Sukharev; Alexey Miroshkin; Vladislav Borisovich Ponomarev; Yulia Gaponenko Applicant: International Business Machines Corporation; IPC main classification:
Patent description:
Field of the Invention [0001] The invention relates to the field of distributed computer systems. In particular, the invention relates to a method and system for portability of virtual images between platforms. Background of the Invention [0002] Cloud computing is a term that describes internet-based services. Internet-based services are hosted by a service provider. Service providers can provide hardware infrastructure or software applications to requesting customers over a computer network. Requesting customers can access the software applications using traditional client-based "browser" software applications, while the software (instructions) and data are stored on servers maintained by the cloud computing providers. [0003] For a service provider to take advantage of running applications in a cloud computing environment, the service provider needs to move its existing applications to the cloud computing environment. Alternatively, a service provider may need to migrate its applications from one computing environment to another. [0004] Porting applications from one environment to another is often time-consuming and difficult to do. The portability of platform-independent solutions in one or more virtual images from one cloud provider to another is difficult because the image formats are specific to the hypervisor technology supported by the cloud computing environment. In addition, several tedious manual configuration steps are required in the guest operating system to suit cloud-specific hypervisor requirements, and often base images are not available with dependent software components partially installed and configured in the appropriate versions, and so on. [0005] Document US0090282404 describes a provisioning server that automatically configures a virtual machine (VM) according to the user's specifications and deploys the VM on a physical host. [0006] The user can choose from a list of pre-configured and prepared VMs to deploy, or can select which hardware, operating system and applications he would like the VM to have. The provisioning server then configures the VM accordingly if the desired configuration is available, or applies heuristics to configure a VM that best matches the user's request if it is not. That document also describes mechanisms for monitoring the state of VMs and hosts, for migrating VMs between hosts and for creating a network of VMs. [0007] Document US 7356679 describes how a source image of the hardware and software configuration of a source computer, including the status of at least one source disk, is automatically captured. The source computer may remain unprepared and require no program to facilitate cloning and reconfiguring the computer. The source image is automatically analyzed and the target computer's hardware configuration is determined. The source image is modified as needed for compatibility with the target computer, or for customization, and, after possible modification, the source image is deployed on the target computer. Either or both of the source and destination computers can be virtual machines. [0008] However, none of the prior art solutions describes a means by which to port a partial or complete solution that includes one or more virtual images, with compatibility checks and information, from one cloud computing environment to another.
Summary of the Invention [0009] In one embodiment, an approach is provided that differentiates a source topology model associated with a source platform and a target topology model associated with a target platform. This differentiation is performed by a processor and results in a topology difference. An operation in a workflow model is obtained from a resource library, the operation being associated with the topology difference. At least part of the resource library is stored on a persistent storage medium. The operation is passed on for implementation in order to implement a part of a solution. The implemented part of the solution includes a target image compatible with the target platform. [0010] The foregoing is a summary and therefore contains, by necessity, simplifications, generalizations and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be limiting in any way. Other aspects, characteristics and advantages of the invention, as defined solely by the claims, will become apparent in the detailed, non-limiting description below. [0011] Seen from a first aspect, the present invention provides a method implemented by an information processing system that comprises: differentiating a source topology model associated with a source platform and a target topology model associated with a target platform, resulting in a topology difference, in which at least part of the differentiation is performed by a processor; obtaining an operation in a workflow model from a resource library, where the operation is associated with the topology difference and where at least a part of the resource library is stored in a persistent storage medium; and transmitting the operation to implement at least part of a solution, where the implemented part of the solution includes a virtual target image compatible with the target platform. [0012] Preferably, the present invention provides a method which further comprises: implementing the part of the solution on the target platform by executing the operation. [0013] Preferably, the present invention provides a method in which an implementation result includes portability of the solution from the source platform to the target platform. [0014] Preferably, the present invention provides a method in which the source platform is a first cloud and in which the target platform is a second cloud. [0015] Preferably, the present invention provides a method in which the solution is a composite solution and in which the second cloud includes a plurality of clouds. [0016] Preferably, the present invention provides a method in which the source platform is a private cloud and in which the target platform is a public cloud. [0017] Preferably, the present invention provides a method which further comprises: searching metadata for at least one base virtual image metadata that is associated with the target platform. [0018] Preferably, the present invention provides a method which further comprises: searching for the metadata in one or more cloud-specific virtual image libraries. [0019] Preferably, the present invention provides a method which further comprises: retrieving input parameters used in the search, in which the input parameters are provided by the target topology model.
[0020] Preferably, the present invention provides a method which further comprises: retrieving one or more base virtual image descriptions from the metadata stored in the resource library in response to the search. [0021] Preferably, the present invention provides a method in which the source platform is a first cloud and in which the target platform is a second cloud, the method further comprising: retrieving, from the resource library, a first set of model units that correspond to the source platform; retrieving, from the resource library, a second set of model units that correspond to the target platform, in which the differentiation results in one or more model units in common and one or more different model units; reusing a first set of one or more workflow steps that correspond to each of the model units in common; retrieving, from the resource library, a second set of one or more workflow steps that correspond to one or more of the different model units; and creating the workflow model using the first set of reused workflow steps and the second set of retrieved workflow steps. [0022] Preferably, the present invention provides a method which further comprises: associating the topology difference with the source topology model and the target topology model; and storing the topology difference and the associations in the resource library as a patch. [0023] Preferably, the present invention provides a method which further comprises: receiving a second source topology model that is partly in common with the source topology model and a second target platform which is partly in common with the target platform; searching for one or more patches, including the patch, in the resource library, where the search includes the associated source topology model; retrieving the patch from the resource library in response to the search; and applying the retrieved patch to the second source topology model, resulting in a second target topology model associated with the second target platform. [0024] Preferably, the present invention provides a method in which the solution is a composite solution comprised of multiple virtual parts, in which a virtual part is comprised of a model of a virtual image and its constituent guest OS, middleware and application model units, where each virtual part is implemented on a different target platform and where each target platform corresponds to a public cloud or a private cloud. [0025] Preferably, the present invention provides a method which further comprises: transmitting a plurality of operations, including the operation, to implement a complete solution. [0026] Preferably, the present invention provides a method in which the differentiation further comprises: identifying a first set of model units that correspond to the source platform; identifying a second set of model units that correspond to the target platform; comparing the first set of model units with the second set of model units, the comparison resulting in a set of one or more modified model units and a set of one or more model units in common; retrieving a first set of automation step models from the source topology model that corresponds to the common model units; searching the resource library for the modified model units, the search resulting in a second set of automation step models that correspond to the modified model units; and including the first and second sets of automation step models in the workflow model.
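To make the differentiation and workflow assembly described in paragraphs [0021] and [0026] concrete, the following is a minimal illustrative sketch only; the names used here (ModelUnit, ResourceLibrary.find_steps, build_workflow_model) are hypothetical and are not terms defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ModelUnit:
    """Hypothetical topology model unit: a typed, named piece of a topology model."""
    unit_type: str          # e.g. "guest_os", "middleware", "virtual_image"
    name: str               # e.g. "application-server", "RHEL"
    attributes: tuple = ()  # configuration parameters as (key, value) pairs

@dataclass
class ResourceLibrary:
    """Hypothetical resource library mapping model units to automation step models."""
    steps_by_unit: dict = field(default_factory=dict)

    def find_steps(self, unit: ModelUnit):
        # Search the library for automation step models matching the unit.
        return self.steps_by_unit.get((unit.unit_type, unit.name), [])

def build_workflow_model(source_units, target_units, library, source_steps_by_unit):
    """Differentiate source and target topology model units and assemble a workflow model."""
    source_set, target_set = set(source_units), set(target_units)
    common = source_set & target_set          # units shared by both topologies
    changed_or_new = target_set - source_set  # units modified or new in the target

    workflow = []
    # Reuse the workflow steps of units common to source and target ([0021]).
    for unit in common:
        workflow.extend(source_steps_by_unit.get(unit, []))
    # Retrieve automation step models from the library for changed or new units ([0026]).
    for unit in changed_or_new:
        workflow.extend(library.find_steps(unit))
    return workflow
```

In this sketch, units removed in the target topology simply contribute no steps; the patent additionally contemplates recording such differences as a patch stored in the resource library, as described in paragraphs [0022] and [0023].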
[0027] Preferably, the present invention provides a method in which the differentiation results in the identification of one or more new units, in which the new units are found in the target topology model and are not found in the source topology model, and where the method also includes: searching the resource library for the new units. [0028] Preferably, the present invention provides a method in which the operation to implement the solution part allows the solution part to be transferred from one cloud provider to another cloud provider. [0029] Preferably, the present invention provides a method in which the operation to implement the solution part defines a security firewall. [0030] Preferably, the present invention provides a method in which the operation to implement the solution part copies the target virtual image to the target platform. [0031] Preferably, the present invention provides a method in which the source platform is a first hypervisor that runs on a first set of one or more computer systems, where the target platform is a second hypervisor that runs on a second set of one or more computer systems and where the first and second hypervisors are different types of hypervisors. [0032] Preferably, the present invention provides a method in which the source and target topology models each include metadata describing one or more software application components, a middleware software application component and a guest operating system component. [0033] Preferably, the present invention provides a method in which the metadata included with the source and target topology models each further includes one or more virtual image implementation parameters, in which the source topology model includes source topology model units of source configuration parameters that correspond to a source cloud, source credentials and source service endpoints, and in which the target topology model includes target topology model units of target configuration parameters that correspond to a target cloud, target credentials and target service endpoints. [0034] Preferably, the present invention provides a method which further comprises: associating each of the source topology model units with a source automation step model that details one or more of the operations used to implement the associated source topology model unit; associating each of the target topology model units with a target automation step model that details one or more of the operations used to implement the associated target topology model unit; and storing each of the target automation step models in the workflow model. [0035] Preferably, the present invention provides a method in which the execution of the operation to implement the part of the solution allows "multi-tenancy" (multiple tenants). [0036] Preferably, the present invention provides a method in which the execution of the operation to implement the part of the solution allows for variable workloads.
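As an illustration of the metadata and associations described in paragraphs [0032] through [0034], the sketch below models a topology whose units carry configuration parameters, credentials and service endpoints, and where each unit is associated with an automation step model. All names here (TopologyModel, AutomationStepModel, the example endpoint, and so on) are assumptions made for illustration, not structures defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AutomationStepModel:
    """Details the operations used to implement one topology model unit ([0034])."""
    name: str
    operations: List[str]  # e.g. ["copy_target_virtual_image", "configure_firewall"]

@dataclass
class TopologyModelUnit:
    """One unit of a topology model with its metadata ([0032], [0033])."""
    unit_type: str                      # "application", "middleware", "guest_os", "virtual_image"
    name: str
    config_parameters: Dict[str, str] = field(default_factory=dict)
    credentials: Dict[str, str] = field(default_factory=dict)
    service_endpoints: List[str] = field(default_factory=list)

@dataclass
class TopologyModel:
    """A topology model: its units plus the automation step model associated with each unit."""
    platform: str                                       # e.g. "source-cloud" or "target-cloud"
    units: List[TopologyModelUnit] = field(default_factory=list)
    step_for_unit: Dict[str, AutomationStepModel] = field(default_factory=dict)

    def associate(self, unit: TopologyModelUnit, step: AutomationStepModel) -> None:
        # Associate a unit with the automation step model that implements it ([0034]).
        self.units.append(unit)
        self.step_for_unit[unit.name] = step

# Example: a target topology unit for a middleware component and its deployment steps.
target = TopologyModel(platform="target-cloud")
middleware_unit = TopologyModelUnit(
    unit_type="middleware",
    name="application-server",
    config_parameters={"http_port": "9080"},
    service_endpoints=["https://target-cloud.example/api"],  # hypothetical endpoint
)
target.associate(middleware_unit, AutomationStepModel(
    name="deploy-application-server",
    operations=["copy_target_virtual_image", "start_instance", "configure_firewall"],
))
```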
[0037] Seen from another aspect, the present invention provides an information processing system comprising: one or more processors; a memory accessible by at least one of the processors; a persistent storage medium accessible by at least one of the processors; a network interface that connects the information processing system to a computer network, where the network interface is accessible by at least one of the processors; and a set of instructions stored in memory and executed by at least one of the processors to perform actions of: differentiating a source topology model associated with a source platform and a target topology model associated with a target platform that results in a difference in topology; obtaining an operation on a workflow model from a resource library, where the operation is associated with the difference in topology and where at least part of the resource library is stored in the persistent storage medium; and transmitting the operation to implement at least part of a solution, where the implemented part of the solution includes a virtual target image compatible with the target platform. [0038] Preferably, the present invention provides an information processing system in which the actions further include: implementation of the part of the solution on the target platform that performs the operation. [0039] Preferably, the present invention provides an information processing system in which the source platform is a first cloud and in which the destination platform is a second cloud. [0040] Preferably, the present invention provides an information processing system in which the actions further comprise: searching for metadata for at least one basic virtual image metadata that is associated with the target platform. [0041] Preferably, the present invention provides an information processing system in which the actions further comprise: retrieving one or more basic virtual image descriptions from the metadata stored in the resource library in response to the search. [0042] Preferably, the present invention provides an information processing system in which the source platform is a first cloud, in which the destination platform is a second cloud and in which the actions further comprise: recover, from the resource library, a first set of model units that correspond to the source platform; retrieving, from the resource library, a second set of model units that correspond to the target platform, in which the differentiation results in one or more model units in common and one or more different model units; reuse a first set of one or more workflow steps that correspond to each of the model units in common; retrieve, from the resource library, a second set of one or more workflow steps that correspond to one or more of the different model units; and create the workflow model using the first set of reused workflow steps and the second set of retrieved workflow steps. [0043] Preferably, the present invention provides an information processing system in which the actions further comprise: associating the topology difference to the source topology model and the destination topology model; and store the topology difference and associations in the resource library as a patch. 
[0044] Preferably, the present invention provides an information processing system in which the actions further comprise: receiving a second source topology model that is partially in common with the source topology model and a second target platform that is partially in common with the target platform; searching for one or more patches, including the patch, in the resource library, where the search includes the associated source topology model; retrieving the patch from the resource library in response to the search; and applying the retrieved patch to the second source topology model, resulting in a second target topology model associated with the second target platform. [0045] Preferably, the present invention provides an information processing system in which the actions further comprise: identifying a first set of model units that correspond to the source platform; identifying a second set of model units that correspond to the target platform; comparing the first set of model units with the second set of model units, the comparison resulting in a set of one or more modified model units and a set of one or more model units in common; retrieving a first set of automation step models from the source topology model that corresponds to the common model units; searching the resource library for the modified model units, the search resulting in a second set of automation step models that correspond to the modified model units; and including the first and second sets of automation step models in the workflow model. [0046] Preferably, the present invention provides an information processing system in which the differentiation results in an identification of one or more new units, in which the new units are found in the target topology model and are not found in the source topology model, and in which the actions also include: searching the resource library for the new units. [0047] Preferably, the present invention provides an information processing system in which the source platform is a first hypervisor that runs on a first set of one or more computer systems, where the target platform is a second hypervisor that runs on a second set of one or more computer systems and where the first and second hypervisors are different types of hypervisors. [0048] Preferably, the present invention provides an information processing system in which the source and target topology models each include metadata describing one or more software application components, a middleware software application component and a guest operating system component. [0049] Preferably, the present invention provides an information processing system in which the metadata included with the source and target topology models each further includes one or more virtual image implementation parameters, in which the source topology model includes source topology model units of source configuration parameters that correspond to a source cloud, source credentials and source service endpoints, and in which the target topology model includes target topology model units of target configuration parameters that correspond to a target cloud, target credentials and target service endpoints.
[0050] Preferably, the present invention provides an information processing system in which the actions further comprise: associating each of the source topology model units with a source automation step model that details one or more of the operations used to implement the associated source topology model unit; associating each of the target topology model units with a target automation step model that details one or more of the operations used to implement the associated target topology model unit; and storing each of the target automation step models in the workflow model. [0051] Seen from another aspect, the present invention provides a method implemented by an information processing system that comprises: obtaining a topology model unit that is to be implemented on a target platform; searching a plurality of automation step models, stored in a resource library on a persistent storage medium, for a selected automation step model that is associated with the received topology model unit, the search being performed by one or more processors; obtaining one or more implementation operations from the resource library, where the obtained implementation operations are associated with the selected automation step model; and performing the obtained implementation operations to implement the topology model unit on the target platform. [0052] Preferably, the present invention provides a method in which the search further comprises: identifying a set of automation step models from the plurality of automation step models, in which each automation step model in the identified set is associated with the topology model unit; and comparing the identified set of automation step models with a set of criteria, the comparison resulting in the identification of the selected automation step model. [0053] Preferably, the present invention provides a method implemented by an information processing system comprising: retrieving, using a processor, source image metadata from a persistent storage medium, in which the source image metadata correspond to a source image associated with a source platform; comparing, by the processor, the retrieved source image metadata with one or more available image metadata that correspond to one or more available virtual images associated with a target platform; identifying, based on the comparison, the available image metadata that is most compatible with the source image metadata; and using the available virtual image that corresponds to the identified available image metadata as a target virtual image compatible with the target platform. [0054] Preferably, the present invention provides a method in which the source image metadata is associated with a source topology model and the target platform is associated with a target topology model. [0055] Preferably, the present invention provides a method in which the source image metadata and the available image metadata include metadata about software components. [0056] Preferably, the present invention provides a method which further comprises: improving the available virtual image before use. [0057] Preferably, the present invention provides a method in which the improvement further comprises: updating one or more software components included in the available virtual image. [0058] Preferably, the present invention provides a method in which the update includes the addition of one of the software components.
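One way to picture the matching described in paragraph [0053] is a simple scoring comparison between the software-component metadata of the source image and of each image available on the target platform. This is a sketch under assumptions only: the scoring function, field names and example identifiers below are illustrative and are not the patent's prescribed algorithm.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ImageMetadata:
    """Metadata describing a virtual image: its software components and versions ([0055])."""
    image_id: str
    components: Dict[str, str] = field(default_factory=dict)  # e.g. {"os": "RHEL 6", "jre": "1.6"}

def compatibility_score(source: ImageMetadata, candidate: ImageMetadata) -> int:
    """Count the source image components that the candidate image already satisfies."""
    return sum(
        1
        for name, version in source.components.items()
        if candidate.components.get(name) == version
    )

def most_compatible_image(
    source: ImageMetadata, available: List[ImageMetadata]
) -> Optional[ImageMetadata]:
    """Pick the available image whose metadata best matches the source image metadata ([0053])."""
    if not available:
        return None
    return max(available, key=lambda candidate: compatibility_score(source, candidate))

# Example: choose a target base image from two hypothetical candidates.
source_meta = ImageMetadata("src-image", {"os": "RHEL 6", "jre": "1.6", "db": "DB2 9.7"})
candidates = [
    ImageMetadata("tgt-image-a", {"os": "RHEL 6", "jre": "1.6"}),
    ImageMetadata("tgt-image-b", {"os": "SLES 11"}),
]
best = most_compatible_image(source_meta, candidates)  # -> tgt-image-a
```

Components that are missing from the chosen image correspond to the "improvement" steps of paragraphs [0056] to [0058], such as installing an additional software component before the image is used.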
[0059] Seen from another aspect, the present invention provides a method which comprises: obtaining an operation associated with a topology difference, where the topology difference is the difference between a source topology model associated with a source platform and a target topology model associated with a target platform; and implementing at least a part of a solution by executing the obtained operation, where the implemented part of the solution includes a target image compatible with the target platform. [0060] Preferably, the present invention provides a method in which the source platform is a first cloud, in which the target platform is a second cloud and in which the implementation of at least part of a solution comprises: transferring at least part of the solution from the first cloud to the second cloud. [0061] Preferably, the present invention provides a method in which the source platform is a private cloud, in which the target platform is a public cloud and in which the implementation of at least part of a solution comprises: implementing the solution from a private cloud to a public cloud. Brief Description of Drawings [0062] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which: Figure 1 is a block diagram of an embodiment of an information processing system that serves as a node in a cloud computing environment and in which the methods described here can be implemented according to a preferred embodiment of the invention; Figure 2 is an embodiment of an extension of the information processing system environment shown in Figure 1 to illustrate that the methods described here can be performed on a wide variety of information processing systems operating in a networked environment, in accordance with a preferred embodiment of the invention; Figure 3 is a diagram representing an embodiment of a cloud computing environment according to a preferred embodiment of the invention; Figure 4 is a diagram representing an embodiment of a set of functional abstraction layers provided by the cloud computing environment according to a preferred embodiment of the invention; Figure 5 is a diagram of an embodiment of source and target topology models and automation step models stored in a resource library that are used to transfer a solution from a source platform to a target platform according to a preferred embodiment of the invention; Figure 6 is a flowchart showing the steps taken to use the topology model units to find similarities and differences between the source and target topology models in order to generate implementation workflows according to a preferred embodiment of the invention;
Figure 7 is a diagram showing an embodiment of automation step models used to create an exemplary implementation workflow that is implemented in a cloud environment in accordance with a preferred embodiment of the invention; Figure 8 is a flowchart showing the steps taken to create a topology model according to a preferred embodiment of the invention; Figure 9 is a flowchart showing the steps taken to create automation step models according to a preferred embodiment of the invention; Figure 10 is a flowchart showing the steps taken to specify input parameters and store them in the topology model according to a preferred embodiment of the invention; Figure 11 is a flowchart showing the steps taken to fully specify and implement a running copy of the cloud-based application according to a preferred embodiment of the invention; Figure 12 is a flowchart showing the steps taken to reuse resources stored in the resource library and implement the solution in a target cloud environment using the reused resources in accordance with a preferred embodiment of the invention; Figure 13 is a flowchart showing the steps taken to find existing topology units that match a request, replace cloud-specific model units, and store new and changed model units in the resource library according to a preferred embodiment of the invention; Figure 14 is a flowchart showing the steps taken to generate an implementation workflow model in accordance with a preferred embodiment of the invention; Figure 15 is a flowchart showing the steps taken to generate an implementation workflow from the model and implement it using an implementation mechanism in accordance with a preferred embodiment of the invention; and Figure 16 is a flowchart showing the steps taken to generate an implementation workflow from the model and to implement a composite solution in multiple cloud-based environments according to a preferred embodiment of the invention. Detailed Description of the Invention [0063] For convenience, the Detailed Description has the following sections: Section 1: Cloud Computing Definitions and Section 2: Detailed Implementation. Section 1: Cloud Computing Definitions [0064] Many of the following definitions have been derived from the "Draft NIST Working Definition of Cloud Computing" by Peter Mell and Tim Grance, dated October 7, 2009. [0065] Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of at least five characteristics, at least three service models and at least four implementation models. [0066] The characteristics are as follows: On-demand self-service: a consumer can unilaterally provision computing resources, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (for example, cell phones, laptops and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a "multi-tenant" model, with different physical and virtual resources dynamically allocated and reassigned according to consumer demand.
There is a sense of location independence in that the customer generally has no control or knowledge about the exact location of the provided resources, but can specify the location at a higher level of abstraction (for example, country, state or data center). Examples of resources include storage, processing, memory, network bandwidth and virtual machines. Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. For the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems control and optimize the use of resources, leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth and active user accounts). The use of resources can be monitored, controlled and reported, providing transparency for both the provider and the consumer of the service used. [0067] Service models are as follows: Cloud Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from multiple client devices through a lightweight client interface, such as a web browser (for example, web-based email). The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Cloud Platform as a Service (PaaS): the capability provided to the consumer is to deploy, onto the cloud infrastructure, applications created or acquired by the consumer using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including networks, servers, operating systems or storage, but has control over the deployed applications and, possibly, application hosting environment configurations. Cloud Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks and other fundamental computing resources, where the consumer can deploy and run arbitrary software, which may include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage and deployed applications, and possibly limited control over selected network components (for example, host firewalls). [0068] Implementation models are as follows: Private Cloud: the cloud infrastructure is operated exclusively for an organization. It can be managed by the organization or by third parties and can exist on site or off site. It has security mechanisms in place. An example of a security mechanism that can be put in place is a firewall. Another example of a security mechanism that can be put in place is a virtual private network (VPN). Community Cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has common concerns (for example, mission, security requirements, policy and compliance considerations). It can be managed by the organizations or by third parties and can exist on-site or off-site.
Public Cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization that sells cloud services. Hybrid Cloud: the cloud infrastructure is a composition of two or more clouds (private, community or public) that remain unique entities, but are joined by standardized or proprietary technology that allows portability of data and applications (for example, cloud bursting for load balancing between clouds). [0069] A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity and semantic interoperability. [0070] A virtual image represents a virtual machine in a file system and can include configuration parameters as needed to run it as a virtual machine. The virtual image can be executed by a software component called a hypervisor that can be located on a physical machine and can supplement an operating system of the physical machine. The virtual image can also be called a machine image or a virtual machine image. [0071] A virtual machine is a software implementation of a machine that runs programs like a physical machine. [0072] An "image" is the state of a computer system and the software running on the computer system. In a hypervisor system, the image can be a "virtual image" because the hypervisor controls access to the computer system's hardware and, from the perspective of a guest operating system or partition, it appears as if the guest operating system or partition controls the entire computer system when, in fact, it is the hypervisor that actually controls access to the computer hardware components and manages the sharing of computer hardware resources between multiple partitions (e.g., guest operating systems). Section 2: Detailed Implementation [0073] As will be appreciated by those skilled in the art, the detailed implementation can be realized as a system, method or computer program product. Consequently, the embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment that combines software and hardware aspects, all of which may generally be referred to herein as a "circuit", "module" or "system". In addition, the embodiments may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied therein. [0074] Any combination of one or more computer-readable media can be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium can be, for example, but without limitation, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include the following: an electrical connection having one or more wires, a portable floppy disk, a hard drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or "flash" memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device or any suitable combination of the above.
In the context of this document, a computer-readable storage medium may be any tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus or device. [0075] A computer-readable signal medium may include a propagated data signal with computer program code embodied in it, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms including, but not limited to, electromagnetic, optical or any suitable combination thereof. A computer-readable signal medium can be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate or transfer a program for use by or in connection with an instruction execution system, apparatus or device. [0076] The program code embodied in a computer-readable medium can be transmitted using any appropriate medium including, but not limited to, wireless media, cables, fiber optic cables, RF, etc., or any suitable combination of the above. [0077] Computer program code for performing operations can be written in any combination of one or more programming languages, including an object-oriented programming language, such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer can be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, via the Internet using an Internet Service Provider). [0078] Embodiments are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer or other programmable data processing apparatus to produce a machine, so that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block or blocks of the block diagram. [0079] These computer program instructions can also be stored on a computer-readable medium that can direct a computer, another programmable data processing apparatus or other devices to function in a particular way, so that the instructions stored in the computer-readable medium produce an article of manufacture that includes instructions that implement the function/act specified in the flowchart and/or block or blocks of the block diagram.
[0080] The computer program instructions (functional descriptive material) can also be loaded onto a computer, another programmable data processing apparatus or other devices to cause a series of operational steps to be performed on the computer, the other programmable apparatus or the other devices to produce a computer-implemented process, so that the instructions which are executed on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block or blocks of the block diagram. [0081] Certain specific details are set out in the following description and figures to provide a thorough understanding of various embodiments. Certain well-known details, often associated with computing and software technology, are not presented in the description below, however, to avoid unnecessarily obscuring the various embodiments. In addition, those skilled in the relevant art will understand that they can practice other embodiments without one or more of the details described below. Finally, although various methods are described with reference to steps and sequences in the description below, the description as such is to provide a clear implementation of the embodiments, and the steps and sequences of steps should not be taken as required to practice the embodiments. Instead, the following is intended to provide a detailed description of one or more embodiments and should not be taken as limiting; any number of variations may fall within the scope that is defined by the claims that follow the detailed description. [0082] The following detailed description will, in general, follow the summary as presented above, explaining and further expanding the definitions of the various aspects and embodiments as needed. For this purpose, the detailed description first presents an example of a computing environment in Figure 1 that is suitable for implementing the software and/or hardware techniques associated with an embodiment. An embodiment of a network environment is illustrated in Figure 2 as an extension of the basic computing environment to emphasize that modern computing techniques can be performed on several different devices. [0083] Referring now to Figure 1, a schematic representation of an embodiment of an information processing system that can serve as a cloud computing node is shown. Cloud computing node 10 is just one example of a suitable cloud computing node and is not intended to suggest any limitation regarding the scope of use or functionality described here. However, cloud computing node 10 is capable of implementing and/or performing any of the functions presented in Section 1 above. [0084] In cloud computing node 10 there is a computer/server system 12, which is operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of known computing systems, environments and/or configurations that may be suitable for use with the computer/server system 12 include, but are not limited to, personal computer systems and server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronic devices, networked PCs, minicomputer systems, mainframe computer systems and distributed cloud computing environments that include any of the above systems or devices, and so on.
[0085] The computer / server system 12 can be described in the general context of instructions executable in a computer system, such as program modules, which are being executed by a computer system. In general, program modules include routines, programs, objects, components, logic, data structures and so on that perform particular tasks or implement certain types of abstract data. The computer / server system 12 can be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are connected via a communications network. In a distributed cloud computing environment, program modules can be located on local and remote computer system storage media, including memory storage devices. [0086] As shown in Figure 1, the computer / server system 12 on the cloud computing node 10 is shown in the form of a general purpose computing device. The components of the computer / server system 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28 and a bus 18 that couples various system components, including system memory 28 to the processor 16. [0087] Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics accelerator port and a local processor or bus that uses any one of a variety of bus architectures. By way of example and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus (EISA), Video Electronics Standards Association (VESA) local bus and Peripheral Component Interconnects (PCI bus) ). [0088] The computer / server system 12 typically includes a variety of readable media in the computer system. Such media can be any available media that is accessible by computer / server system 12 and includes both volatile and non-volatile media, removable and non-removable media. [0089] System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and / or cache memory 32. The computer / server system 12 may further include other removable / non-removable, volatile / non-volatile computer storage media. For example, a hard disk drive 34 can be provided for reading and writing to a non-removable, non-volatile magnetic medium (not shown and typically referred to as a "hard disk"). Although not shown, a magnetic disk drive for reading and writing to a non-volatile removable magnetic disk (for example, a "floppy disk") and an optical disk drive for reading or writing to a non-volatile removable optical disk, such as a CD-ROM, DVD-ROM or other optical medium, can be supplied. In such cases, each can be connected to bus 18 via one or more data media interfaces. As will be further illustrated and described below, memory 28 may include at least one program product that has a set (for example, at least one) of program modules that are configured to perform the functions described here. [0090] The program / utility 40 that has a set (at least one) of program modules 42 can be stored in memory 28, by way of example and not as a limitation, as well as an operating system, one or more application programs , other program modules and program data. Each of the operating system, one or more application programs, other program modules and program data, or some combination thereof may include an implementation of a network environment. 
Program modules 42 generally perform the functions and / or methodologies as described here. [0091] The computer / server system 12 can also communicate with one or more external devices 14, such as a keyboard, a pointer device, a monitor 24, etc .; one or more devices that allow the user to interact with the computer / server system 12; and / or any devices (for example, network card, modem, etc.) that allow the computer / server system 12 to communicate with one or more other computing devices. Such communication can occur through I / O interfaces 22. In addition, the computer / server system 12 can communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN) and / or a public network (for example, the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other components of the computer / server system 12 via bus 18. It should be understood that , although not shown, other hardware and / or software components may be used in conjunction with the computer / server system 12. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external hard drive arrangements, RAID systems, tape drivers and data file storage systems, etc. [0092] Figure 2 provides an extension of the information processing system environment shown in Figure 1 to illustrate that the methods described here can be performed on a wide variety of information processing systems operating in a networked environment. with an embodiment. The types of information processing systems range from small portable devices, such as a portable computer / mobile phone 210 to large "mainframe" systems, such as a mainframe computer 270. Examples of portable computer 210 include personal digital assistants (PDAs), personal entertainment devices such as MP3 players, portable televisions and compact disc players. Other examples of information processing systems include pen, or tablet, computer 220, laptop or notebook, computer 230, workstation 240, personal computer system 250 and server 260. Other types of information processing systems that are not shown individually in Figure 2 are represented by the information processing system 280. As shown, the various information processing systems can be networked using the computer network 200. The types of computer networks that can be used to interconnect the various Information processing systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN) , other wireless networks and any other network topology that can be used to interconnect information processing systems. Many information processing systems include non-volatile data memory, such as hard disk drives and / or non-volatile memory. Many of the information processing systems shown in Figure 2 describe separate non-volatile data memories (server 260 uses non-volatile data memory 265, mainframe computer 270 uses non-volatile data memory 275 and the processing system information 280 uses non-volatile data memory 285). [0093] Non-volatile data memory can be a component that is external to the various information processing systems or it can be internal to one of the information processing systems. 
In addition, the removable non-volatile memory device 145 can be shared between two or more information processing systems using various techniques, such as connecting the removable non-volatile memory device 145 to a USB port or other connector of the information processing systems. In addition, computer network 200 can be used to connect various information processing systems to cloud computing environments 201, which include cloud computing environment 205 and any variety of other cloud computing environments. As described in Figures 3 and 4, a cloud computing environment 300 comprises a series of networked information processing systems (nodes) that work together to provide the cloud computing environment. The cloud computing environments 201 each provide the layers of abstraction shown in Figure 4. An abstraction layer comprises a hardware and software layer 400, a virtualization layer 410, a management layer 420 and a workload layer 430. Components within the various layers 400-430 can vary from one cloud environment to another. An example of the components found within the various layers is shown in Figure 4. [0094] Referring now to Figure 3, an illustrative cloud computing environment 201 is represented. As shown, the cloud computing environment 201 comprises one or more cloud computing nodes 10 with which computing devices such as, for example, a personal digital assistant (PDA) or cell phone 210, laptop computer 250, laptop computer 290, automotive computer system 230 and other types of client devices shown in Figure 2 communicate. This allows infrastructure, platforms and/or software to be offered as services (as described above in Section 1) from the cloud computing environment 201, so as not to require that each customer separately maintain such resources. It should be understood that the types of computing devices shown in Figures 2 and 3 are intended to be illustrative only and that the cloud computing environment 201 can communicate with any type of computerized device over any type of network and/or network-addressable connection (for example, using a web browser). [0095] As the inventors have recognized, source and destination virtual images for different cloud providers (or hypervisors) may have incompatible hardware architectures, hypervisor technologies, operating system types and/or versions, and/or middleware. The disk partitions in a virtual image can be specific to a cloud provider (or hypervisor). Direct copying of content with minor customizations may not work. For example, Amazon EC2 supports XEN virtual images for x86 hardware, while an IBM cloud with P series servers supports only P system images. [0096] The present inventors also recognized that, in some situations, cloud-specific (or hypervisor-specific) configurations, such as firewalls and bulk storage volumes, cannot be added to the ported solution due to API differences. For example, Amazon EC2 offers configuration options for attaching bulk storage volumes to running copies, but a VMware-based private cloud may not provide this option. [0097] The present inventors also recognized that solutions that include multiple virtual images may only need to be partially transferred to a different cloud (or hypervisor).
For example, a solution with a business logic layer in one virtual image and a database layer in another virtual image may require that only the virtual image of the business logic layer be transferred to a public cloud and the virtual image database layer remains in a private cloud due to data privacy issues. However, the inventors also recognized that, in some situations, it may be desirable to transfer an entire solution to a different cloud (or hypervisor). [0098] The present inventors also recognized that it may be desirable to identify and / or view altered, added and / or eliminated solution components / configurations and corresponding altered, added and / or eliminated implementation operations in a provisioning workflow as for portability to a different cloud (or hypervisor). The inventors also recognized that it may be desirable to store these modifications, additions and / or deletions in a patch for the source model. For example, transferring a WebSphere application from a VMware-based private cloud to Amazon EC2 requires changing the base VMware image to a base Amazon Machine Image (AMI) with WebSphere operations and changing for deployment to an AMI image, instead of a VMware image. [0099] As the present inventors have also recognized, the provision of solutions for multiple clouds may be desirable. For example, Amazon EC2 and IBM Development at Test Cloud both offer APIs to copy an image and remotely connect to the running virtual machine securely and perform provisioning solution tasks remotely. [0100] Referring now to Figure 4, an embodiment of a set of functional abstraction layers 400 provided by the cloud computing environment 201 is shown (Figure 3). It should be understood in advance that the components, layers and functions shown in Figure 4 are intended to be illustrative only and the embodiments are not limited to them. As shown, the following corresponding layers and functions are provided: - the hardware and software layer 410 includes hardware and software components. Examples of hardware components include "mainframes", in one example, IBM® zSeries® systems; servers based on the RISC (Reduced Instruction Set Computer) architecture, in an example IBM pSeries® systems; IBM xSeries® systems; IBM BladeCenter® systems; memory devices; networks and network components. Examples of software components include network application server software, in one example, IBM WebSphere® application server software; and database software, in one example, IBM DB2® database software. (IBM, zSeries, pSeries, xSeries, BladeCenter, WebSphere and DB2 are registered trademarks of International Business Machines Corporation in the United States, other countries, or both); - the virtualization layer 420 provides an abstraction layer from which the following virtual entities can be provided: virtual servers; virtual memory; virtual networks, including virtual private networks; virtual applications; and virtual customers; - the management layer 430 provides the functions described below. Resource provisioning allows for the dynamic acquisition of computing resources and other resources that are used to perform tasks within the cloud computing environment. Measurement and pricing allows cost control as resources are used within the cloud computing environment and billing or billing for the consumption of these resources. In one example, these features may include application software licenses. 
Security allows identity verification for users and tasks, as well as data protection and other features. The user portal allows access to the cloud computing environment for users and system administrators. Service level management allows allocation and management of cloud computing resources so that the required service levels are met. Service Level Agreement (SLA) planning and fulfillment allows the preconfiguration for, and acquisition of, cloud computing resources for which a future requirement is anticipated according to an SLA; - the workload layer 440 provides the functionality for which the cloud computing environment is used. Examples of workloads and functions that can be provided from this layer include: mapping and navigation; software development and life cycle management; virtual classroom education delivery; data analytics processing; and transaction processing. As mentioned above, all of the previous examples described in relation to Figure 4 are illustrative only, and the embodiments are not limited to these examples. [0101] Figure 5 is a diagram of an embodiment of source and target topology models and automation step models stored in a resource library to be used to transfer a solution from a source platform to a target platform. In one embodiment, a "solution" includes one or more software applications, run by one or more hardware-based computer systems, that are used to satisfy certain functional and non-functional requirements. A software solution includes one or more software applications, as well as their related configuration parameters. In one embodiment, a solution is a complete ("turn-key"), one-stop solution. The solution can include multiple applications and can include configuration information affiliated with the applications. A solution can be of a composite nature in that it can include multiple virtual images. Part of a solution can be in one virtual image and part of the solution can be in another virtual image. Resource library 500 is used as a repository for topology and automation data. Topology data describes cloud-based solution components and their relationships. The resource library 500 is used to store data for any number of topology models 510. In one embodiment, the resource library 500 is stored on a persistent storage medium, such as a non-volatile memory device, accessible from a computer system. Each of the topology models 510 describes the data used in a cloud-based solution. When first implementing a cloud-based solution (the "target cloud topology") in a particular cloud environment (the "target cloud"), resources stored in the resource library 500 can be searched to identify a topology already stored in the resource library (the "source cloud topology") that can be used to develop the target cloud topology by reusing multiple source cloud topology model units. In one embodiment, this occurs when transferring a solution from the source cloud (for example, a cloud provided by a first cloud provider) to the destination cloud (for example, a cloud provided by a second cloud provider). [0102] An embodiment of a source platform would be a cloud (for example, the Amazon EC2™ cloud) where a solution has been implemented. An embodiment of the corresponding target platform can be another cloud (for example, the IBM Smart Business Development and Test cloud) to which the solution is being transferred.
An embodiment of a source platform can be a hypervisor (for example, a VMware hypervisor) and an embodiment of the corresponding target platform could be another hypervisor (for example, a KVM hypervisor). An embodiment of a source platform can be a physical computer system with disk partitions (for example, IBM pSeries servers) and an embodiment of the corresponding target platform could be another physical computer system (for example, a Sun Microsystems server). In addition, platforms can be mixed and matched. For example, an embodiment of a source platform could be a hypervisor and an embodiment of the corresponding target platform could be a cloud. [0103] Although model units that correspond to numerous topology models can be stored in resource library 500, two are shown in Figure 5 - source cloud topology model units 520 and target cloud topology model units 560. Both the source and target cloud topology models include several model units, such as metadata (522 and 562, respectively), credentials and service endpoint data (524 and 564, respectively) and configuration parameters (526 and 566, respectively). Model units can be used to represent applications, middleware, guest operating systems, virtual images, configuration parameters and other components of the solution. Each model unit can include metadata, such as virtual image type, ID, configuration parameters, software versions, access credentials, firewall rules, etc. The metadata for the respective topology models includes a series of metadata items, such as the virtual image distribution parameters, metadata about the software included in the topology, metadata about the middleware included in the topology, and metadata about the guest operating system included in the topology. [0104] As shown, automation step models 515 are also stored in resource library 500 and are associated with topology model units. As the name implies, the automation step models describe the automation steps used to implement the various topology model units included in the topology. The source cloud automation step models 530 include the automation steps used to implement the source cloud topology model units 520, while the target cloud automation step models 570 include the automation steps used to implement the target topology model units 560. When developing the target topology model units 560 and the target automation step models 570, the differences between the source topology model units 520 and the target topology model units are identified, along with new model units that exist in the target topology but not in the source topology. [0105] In addition, the differences include removed model units that exist in the source topology but not in the target topology. Resource library 500 is searched to find automation step models for the new and different model units identified in the target topology model. The differences can be stored in resource library 500 as a patch and can be applied to a similar source topology model to create a target topology model. One or more processors can be used to differentiate between the source topology model that is associated with a source platform (for example, a source cloud, a source hypervisor, etc.) and a target topology model that is associated with a target platform (for example, a target cloud, a target hypervisor, etc.). A topology model includes topology model units. [0106] Topology model units can include unit parameters or attributes and can also include a type.
The type can be, for example, a virtual image, middleware, an operating system, a firewall or any other type known in the art. A hypervisor is software that can run on top of an operating system or underneath an operating system. A hypervisor can allow different operating systems, or different copies of a single operating system, to run on one system at the same time. In other words, a hypervisor can allow a host system to host multiple guest machines. The differentiation results in a topology difference that includes new, modified and removed model units. The topology difference can be a set of topology model units that correspond to the various components of the solution that are different in the target topology model compared to the source topology model. [0107] It may be necessary for the set of topology model units to be modified when transferring the solution from a source platform to a target platform. A set that includes at least one operation in a workflow model (for example, an automation step model stored in automation step models 515) is obtained from resource library 500. An operation in a workflow is a piece of functionality that can be performed. For example, an operation in a workflow model can be used to install an sMash application on an sMash application server. [0108] As another example, an operation in a workflow model can be used to copy a virtual image. Each operation is associated with one of the model units in the topology difference between the source and target topology models. In one embodiment, a complete solution (for example, the solution being transferred from the source topology to the target topology) is implemented by performing one or more of the obtained operations using one or more processors. In one embodiment, a portion of a solution is transferred from the source topology to the target topology by performing one or more of the obtained operations using one or more processors. The implemented part of the solution includes a target image that is compatible with the target platform (for example, target cloud 590, a target hypervisor, etc.). In one embodiment, compatibility depends on a variety of factors, such as hypervisor technology, hardware architecture, operating system versions, middleware versions, APIs available in different clouds (such as the configuration of firewalls and VPNs) and so on. In one embodiment, incompatibility results from the same kinds of factors, where one or more components, as discussed above, are incompatible between a source platform and a target platform. [0109] As shown, the source implementation workflow 575 operates to implement the solution in the source cloud 580. The implementation results in a virtual image 582 being loaded into the source cloud and an application 550 being deployed on a running copy of the middleware image. Likewise, the target deployment workflow 585 operates to deploy the solution to the target cloud 590. The deployment results in a virtual image 592 being loaded into the target cloud and the application 550 being deployed on a running copy of the middleware image. [0110] It should be noted that the common application 550 is implemented in both the source cloud 580 and the target cloud 590. In one embodiment, application 550 is a platform-independent application, such as an application written in a platform-independent computer language, such as the Java programming language, that runs on a "virtual machine" that is platform dependent.
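By way of illustration only, the following Java sketch shows one possible in-memory representation of topology model units of the kind described above, each with a type, an identifier and a set of configuration parameters. The class, field and parameter names are assumptions made for this example and are not part of the described embodiments.

```java
import java.util.Map;

// Illustrative sketch only: one possible representation of topology model units.
// All names and example values are assumptions made for this illustration.
public class TopologyModelUnitExample {

    // A model unit has a type (virtual image, middleware, operating system,
    // firewall, ...), an identifier and a set of configuration parameters.
    record ModelUnit(String type, String id, Map<String, String> parameters) {}

    public static void main(String[] args) {
        ModelUnit virtualImage = new ModelUnit(
                "virtual-image",
                "base-image-001",
                Map.of("hypervisor", "XEN",
                       "architecture", "x86",
                       "guestOs", "Linux"));

        ModelUnit middleware = new ModelUnit(
                "middleware",
                "app-server-001",
                Map.of("product", "application-server",
                       "version", "2.0",
                       "httpPort", "8080"));

        // A topology model is essentially a collection of such units plus their
        // relationships (omitted here for brevity); differencing two such
        // collections yields the new, modified and removed units mentioned above.
        System.out.println(virtualImage);
        System.out.println(middleware);
    }
}
```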
The target image can be identified using the metadata in the virtual image model unit and the units it contains in the target topology model. This metadata can be used as an input to fetch metadata for all known virtual images in the target cloud. All of this metadata can be stored in the resource library. A target image can be identified and implemented on the target platform (cloud), so that the application can be deployed on the target platform when the target image is executed. In one embodiment, the target image includes the operating system and middleware (for example, the virtual machine) that provide a suitable target environment for the application to work. In this way, a solution currently running in the source cloud and described in the source topology model can be transferred to and implemented in the target cloud with model units similar to those described in the target topology model. [0111] Figure 6 is a flowchart that shows the steps taken to use topology model units to find similarities and differences between the source and target topology models in order to generate implementation workflows, according to one embodiment. [0112] The common characteristics can be a set of topology model units that correspond to the various components of the solution that are the same in the target topology model as in the source topology model. An example of a common characteristic may be that the type of virtual image represented in a model unit associated with the target platform is the same as the type of virtual image represented in a model unit associated with the source platform. The common characteristics may be partial in nature, such as when the type is the same for both model units but some other parameter, such as an image identifier, is different in the model unit associated with the target topology model compared to the model unit associated with the source topology model. The common characteristics can be complete in nature, such as when all parameters of the two topology model units are the same and no parameters are different. If the common characteristics are complete in nature, the associated implementation operations and solution components may not need to be modified when transferring the solution from the source platform to the target platform. If the common characteristics are partial in nature, they can be treated as a difference or they can be treated as entirely in common. [0113] The differences can be a set of topology model units that correspond to the various components of the solution that are different in the target topology model compared to the source topology model. An example of a difference may be that the type of virtual image represented in a model unit associated with the target platform is different from the type of virtual image represented in a model unit associated with the source platform. If there is a difference, it may be necessary to change the associated implementation operations and solution components when transferring the solution from a source platform to a target platform. [0114] Processing starts at 600 when, in step 610, source topology model units and topology models are created and stored in resource library 500. In step 620, an automation step model is created for some of the topology model units that were created for the source topology models in step 610, and these automation models are also stored in resource library 500.
In step 625, the models (topology and automation) that were created in steps 610 and 620 are used to generate the implementation workflow 628 for the source cloud 580. [0115] Steps 630 and 640 are similar to steps 610 and 620; however, steps 630 and 640 are directed to a different (target) cloud. In step 630, the target topology model units and topology models are created and stored in resource library 500. These target topology model units and topology models are designed to implement the same solution implemented in the source cloud; however, the target topology model and topology model units are designed to run the solution on a different "target" cloud. In step 640, an automation step model is created for some of the topology model units that were created for the target topology models in step 630, and these automation models are also stored in resource library 500. [0116] In step 650, the source and target topology models are read from the resource library 500 and compared to identify the differences between the models. These differences may include modified, new or removed units. Modified model units are those units that exist in both the source and target models but have different parameters or subtypes; for example, a virtual image subtype of the source topology differs from a virtual image subtype of the target topology. The difference in parameters can include middleware versions. New model units are those units that did not exist in the source topology but have been added to the target topology (for example, a topology model unit that was not required to implement the solution in the source cloud but is needed in order to implement the solution in the target cloud, etc.). Removed model units are those that were in the source topology but are not present in the target topology. In step 660, resource library 500 is searched for automation step models that correspond to the identified modified and new units. The automation step models found in step 660 are automation step models stored in resource library 500. For example, if a new or modified model unit has been identified, a different topology stored in the resource library may already exist that corresponds to the identified new or modified model unit. In addition, when a model unit is encountered for the first time, an automation step model can be developed for the encountered model unit and stored in resource library 500, so that it will be found when step 660 occurs. [0117] In step 670, the implementation workflow 675 for the target cloud 590 is generated using the automation step models identified in step 660 (for new/modified model units) and some of the steps used to generate the implementation workflow for the source cloud in step 625 (for model units without modification). It should be noted that the deployment operations for the removed units are discarded from the source workflow model. In step 680, the implementation workflows 675 and 628 are executed by the implementation engine 690, resulting in a running copy 582 of the solution in the source cloud 580 and a running copy 592 of the solution in the target cloud 590. In one embodiment, the execution of the implementation workflow is performed by passing the operations included in the workflow model to the implementation engine 690.
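A minimal Java sketch of the comparison and workflow-generation logic of steps 650 to 670 is given below. It assumes simple string-valued parameters and plain-text "operations"; the class names, the keying of units by identifier and the stand-in resource library are illustrative assumptions, not a description of any particular product.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the comparison and workflow-generation steps described
// above (steps 650-670). All names and the string-based "operations" are
// assumptions made for this example.
public class WorkflowGenerationExample {

    record ModelUnit(String id, String type, Map<String, String> parameters) {}

    public static void main(String[] args) {
        // Source topology: units keyed by identifier.
        Map<String, ModelUnit> source = new LinkedHashMap<>();
        source.put("image", new ModelUnit("image", "virtual-image", Map.of("hypervisor", "VMware")));
        source.put("appserver", new ModelUnit("appserver", "middleware", Map.of("version", "1.0")));
        source.put("firewall", new ModelUnit("firewall", "firewall", Map.of("port", "8080")));

        // Target topology: the image is modified, the firewall is removed,
        // and a block-storage unit is new.
        Map<String, ModelUnit> target = new LinkedHashMap<>();
        target.put("image", new ModelUnit("image", "virtual-image", Map.of("hypervisor", "XEN")));
        target.put("appserver", new ModelUnit("appserver", "middleware", Map.of("version", "1.0")));
        target.put("storage", new ModelUnit("storage", "block-storage", Map.of("sizeGb", "50")));

        // Stand-in for the resource library: one deployment operation per unit type.
        Map<String, String> resourceLibrary = Map.of(
                "virtual-image", "copy image to target platform",
                "middleware", "install and configure middleware",
                "block-storage", "attach and configure block storage");

        List<String> targetWorkflow = new ArrayList<>();
        for (ModelUnit unit : target.values()) {
            ModelUnit old = source.get(unit.id());
            if (old == null || !old.parameters().equals(unit.parameters())) {
                // New or modified unit: search the library for a matching operation.
                targetWorkflow.add(resourceLibrary.get(unit.type()) + " [" + unit.id() + "]");
            } else {
                // Unchanged unit: reuse the operation already used for the source cloud.
                targetWorkflow.add("reuse source operation for [" + unit.id() + "]");
            }
        }
        // Removed units (present only in the source) contribute no operation; their
        // deployment steps are simply discarded from the target workflow.
        source.keySet().stream()
                .filter(id -> !target.containsKey(id))
                .forEach(id -> System.out.println("discarding operation for removed unit [" + id + "]"));

        targetWorkflow.forEach(System.out::println);
    }
}
```

In this sketch, unchanged units reuse the operation already used for the source cloud, new and modified units trigger a lookup in the stand-in library, and removed units contribute no operation, mirroring the discarding of their deployment operations described above.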
[0118] The implementation engine 690 can be a software process on the same computer system that performs the generation step in 670, or it can run on a different computer system connected through a computer network. If the same computer system is used, then the operation can be transmitted using an internal operation (for example, via a subroutine call, by executing inline code that handles implementation operations, via an external program call, etc.). If a different computer system is used, then the operation can be transmitted to the other computer system over a network, such as a private computer network (for example, a LAN) and/or a public network (for example, the Internet, the public switched telephone network (PSTN), etc.). Although one implementation engine is shown, different implementation engines can be used. In one embodiment, the automation step models provide a generic representation of the steps used to automate the implementation of the various model units, while the generated implementation workflows (628 and 675) include descriptive functional material (for example, scripts, etc.) designed to be read and processed by the implementation engine 690. [0119] Figure 7 is a diagram showing an embodiment of automation step models used to create an exemplary implementation workflow that is distributed to a cloud environment. The topology model 700 includes topology model units. Automation step models 710 correspond to some of the topology model units. Implementation workflow 720 is generated from automation step models 710 and provides a series of operations to implement the solution. Figure 7 shows an example of a hybrid solution that includes a public target cloud 760 and a private target cloud 780. An example of the public target cloud's function would be operating the client front end that is publicly accessible from a network such as the Internet. An example of the private target cloud's function would be operating the "back-end" server that handles database and LDAP operations. [0120] In the example shown, the implementation workflow includes operations 725 to 750. Operation 725 copies a private machine image to the public target cloud 760 and to the private target cloud 780. This results in a cloud machine image with guest OS 768 being copied to the public target cloud 760. In one embodiment, operation 725 also copies the cloud machine image 782 to the private target cloud 780 while, in another embodiment, the cloud machine image 782 is a common back-end server used by several public target clouds. The cloud machine image 782 copied to and running on the private target cloud 780 includes a guest operating system 784, which may be an operating system other than the operating system running on the public target cloud. The cloud machine image 782 can also include a database server 786 (for example, an IBM DB2™ database server, etc.) under which the database applications operate. The cloud machine image 782 can also include a Lightweight Directory Access Protocol (LDAP) server 788 under which LDAP applications work. [0121] Operation 730 installs a middleware application, such as the IBM WebSphere® sMash™ middleware application, on the image copied to the public target cloud. This results in an application server 772 running on the cloud machine image with guest OS 768. In addition, operation 730 can copy the platform-independent application 774 that runs on the application server.
As shown, the cloud machine image with guest OS 768 includes IP table rules and VPN configuration 770, and the public target cloud includes cloud flexible IP addresses 762, cloud security groups 764, and cloud flexible block storage 766. In a cloud environment, operation 735 is performed to configure flexible IP addresses, resulting in the cloud flexible IP address configuration 762. In this cloud environment, operation 740 is performed to configure the cloud security groups 764, and operation 745 is performed to configure the cloud flexible block storage 766. Operation 750 is performed to configure VPNs (virtual private networks). The result of operation 750 is an update of the IP table rules and VPN configuration 770 that are executed on the copied image 768, which creates a virtual private network between the public target cloud 760 and the private target cloud 780. [0122] Figure 8 is a flowchart showing the steps taken to create a topology model according to an embodiment. Processing starts at 800, after which, in step 802, a middleware unit is selected, such as a Java virtual machine (for example, the IBM WebSphere® sMash application, etc.). The middleware unit is, in general, platform dependent. Topology model 804 includes virtual appliance 806, in which the selected middleware execution environment 810 is located. The platform-independent application 808, such as a Java application, is also selected and associated with the middleware runtime environment 810. In step 805, a topology model unit is added for a cloud-specific base virtual image. The machine image 812 is then included in the virtual appliance 806. In this example, the machine image 812 includes the guest operating system 814 (for example, the Linux operating system, etc.), server software 816, and the cloud image copy 818. [0123] The metadata for existing virtual images compatible with a cloud can be found in one or more cloud-specific image libraries for that cloud. This metadata may include a description of the software components preinstalled in the existing virtual images. The metadata on the virtual appliance unit and the units it includes in the target topology model can describe the prerequisite software components for the solution that may be found pre-installed in the cloud-specific base virtual image. The metadata in the virtual appliance unit can be used to search the metadata in the image libraries to find a suitable base virtual image with which to implement the solution in the target cloud. If the virtual images identified as a search result do not include all the prerequisite software components, or the correct versions of the prerequisite software components, then a closest-match virtual image can be determined. Such a closest-match virtual image can then be enhanced as part of an implementation workflow by adding, updating, or removing software components in the virtual image. [0124] In step 820, in one embodiment, cloud-specific configuration settings 822 and 824 are added, such as flexible IP addresses, volume information, security group settings and so on. In step 826, the application server (middleware execution environment 810) is linked to the application units (platform-independent application 808). In step 828, operating system-specific settings are added and associated with guest operating system 814. [0125] These operating system-specific configuration settings may include HTTP settings, network settings, firewall settings, etc.
In step 832, one or more virtual appliances (external service units 834) are added to the target cloud to host prerequisite application services, such as database services and LDAP services, which are provided externally to the cloud-based virtual appliance. In step 836, application communication links are configured between the application units (application 808) and the prerequisite external services that were added in step 832. In step 838, implementation order restrictions are specified between different model units. Step 838 allows for sequencing of the automation steps used to implement the solution. In step 840, topology model 804, including all topology model units and the specified order of implementation, is stored in resource library 500. In one embodiment, resource library 500 is managed by resource management software 850. In the predefined process 860, the stored topology model and the specified implementation steps are used to create an automation model that is also stored in resource library 500 (see Figure 9 and corresponding text for processing details on creating the automation model). [0126] Figure 9 is a flowchart showing the steps taken to create automation step models according to an embodiment. Processing creates a number of automation step models 910 used to implement the solution for the target cloud. Processing starts at 900 when, at step 905, an automation step model 915 is created. The automation step model 915 includes operation 920 to implement a cloud-specific configuration that establishes security groups, etc., in the target cloud. [0127] In step 925, the automation step model 930 is created to install the application, including parameters for the application server (middleware execution environment) and the platform-independent software application. The automation step model 930 includes operation 935, used to install the platform-independent application (for example, a Java sMash application, etc.), and operation 940, used to configure the middleware runtime environment. In step 945, automation step model 950 is created to copy an image to the target cloud. The automation step model 950 includes operation 955, used to copy a particular image onto the target cloud. In step 960, the automation step models 910 are stored in resource library 500, which is shown being managed by resource manager application software 850. In predefined process 970, input parameter specifications are provided and stored with the target topology in the resource library (see Figure 10 and corresponding text for processing details). [0128] Figure 10 is a flowchart showing the steps taken to specify input parameters and store them in the topology model according to an embodiment. In Figure 10, the topology model 804 introduced in Figure 8 is shown. Processing starts at 1000, after which, in step 1010, deployable artifacts are related to model units. For example, a particular compressed file resource in the resource library that contains the deployable platform-independent binaries is related to the application unit in the topology model. The application 808 includes properties and parameters 1020 in which, for example, a compressed ("zip") file is specified. In step 1030, some, but not all, configuration parameters are specified for the topology model units. For example, an HTTP port, a zip file name and a URL for the application could be specified.
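As an illustration of the partial specification described for step 1030, the following Java sketch shows configuration parameters of which only some are specified up front, with the remainder completed later, before deployment. The parameter names and values are assumptions made for this example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of partially specifying configuration parameters for a
// topology model unit and completing them later. Parameter names and values
// are assumptions made for this example.
public class PartialSpecificationExample {

    public static void main(String[] args) {
        // Only some parameters are specified up front (compare step 1030 above).
        Map<String, String> applicationParameters = new HashMap<>();
        applicationParameters.put("httpPort", "8080");
        applicationParameters.put("archiveFile", "application.zip");
        applicationParameters.put("archiveUrl", null);   // left unspecified for now

        // The partially specified model can be stored and shared as a reusable resource.
        System.out.println("Partially specified: " + applicationParameters);

        // Later, before the deployment workflow is generated and executed, the
        // remaining parameters are fully specified.
        applicationParameters.put("archiveUrl", "http://example.invalid/application.zip");
        boolean fullySpecified = applicationParameters.values().stream().noneMatch(v -> v == null);
        System.out.println("Fully specified: " + fullySpecified + " -> " + applicationParameters);
    }
}
```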
In step 1040, the partially specified model is shared in resource library 500 as a reusable resource. In the predefined process 1050, the copy or copies are completely specified and implemented on the target platform (see Figure 11 and corresponding text for processing details). [0129] Figure 11 is a flowchart that shows the steps taken to fully specify and implement a running copy of the cloud-based application according to one embodiment. The flowchart shown in Figure 11 also shows the implementation of multiple copies for "multi-tenancy" with minimal configuration changes during specification. Processing starts at 1100, after which, in step 1105, the parameters of the topology model units are completely specified and stored in the topology model 804. Some topology model units are associated with automation step models 910 that describe the operations used to implement the topology model units. In step 1110, an ordered sequence of implementation operations is generated and stored in the automation workflow model 1115. In one embodiment, the automation workflow model is a generic representation of the operations used to implement the topology model units. Implementation workflow 1125 is generated from the automation workflow model 1115 in step 1120. In one embodiment, implementation workflow 1125 is a non-generic description of operations in a format that can be executed by a particular implementation engine 1135. In this way, step 1120 can be performed to provide different implementation workflows that work with different implementation engines. [0130] In step 1130, the deployment engine 1135 runs the deployment workflow 1125 and creates one or more running copies (copies 1150 and 1155) of the cloud-based application running on one or more target clouds 1140. In step 1160, the running copy is observed and tested to ensure that the cloud-based solution is working correctly. A determination is made as to whether changes to the parameters specified in the model units are necessary (decision 1170). [0131] If changes are needed, then decision 1170 branches to the "yes" branch when, in step 1175, the copy parameters are edited and processing returns to regenerate the workflow model and the implementation workflow and to rerun the implementation workflow through the implementation engine. This cycle continues until there is no need for further changes, whereupon decision 1170 branches to the "no" branch. It should be noted that it may not be necessary to regenerate the workflow model if such parameters are specified as input parameters that can be modified through user input before re-implementation. A determination is made as to whether multiple copies of the application (cloud-based solution) are being created in the target cloud (or target clouds). If multiple copies of the application are being created, then decision 1180 branches to the "yes" branch when, in step 1185, some workflow parameters are changed in order to create the next copy, and processing returns to generate another workflow model and another implementation workflow and to execute the new implementation workflow using the implementation engine. For example, it may be necessary to implement a new copy for a new tenant in a "multi-tenant" solution. In one embodiment, "multi-tenant" refers to the ability to share platforms (for example, clouds, hypervisors) between multiple clients.
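A small Java sketch of the multi-copy ("multi-tenant") deployment loop of step 1185 follows; the tenant identifiers, parameter names and the stand-in deploy method are assumptions made for this illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of creating several running copies of the same solution
// for different tenants by changing a few workflow parameters per copy.
// Tenant names and parameters are assumptions made for this example.
public class MultiTenantDeploymentExample {

    static void deploy(String tenant, Map<String, String> parameters) {
        // Stand-in for handing the implementation workflow to a deployment engine.
        System.out.println("Deploying copy for " + tenant + " with " + parameters);
    }

    public static void main(String[] args) {
        Map<String, String> baseParameters = Map.of("httpPort", "8080", "dataVolumeGb", "20");

        for (String tenant : List.of("tenant-a", "tenant-b", "tenant-c")) {
            // Copy the shared parameters and override the ones that differ per copy.
            Map<String, String> copyParameters = new HashMap<>(baseParameters);
            copyParameters.put("tenantId", tenant);
            copyParameters.put("hostname", tenant + ".cloud.example.invalid");
            deploy(tenant, copyParameters);
        }
    }
}
```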
In another example, one or more new copies of the solution may be required in order to satisfy different performance or security requirements for different workloads. This cycle continues until no more copies of the application are desired, at which point decision 1180 branches to the "no" branch and processing ends at 1195. [0132] Figure 12 is a flowchart showing the steps taken to reuse resources stored in the resource library and to implement the solution in a target cloud environment using the reused resources, according to one embodiment. Processing starts at 1200, after which, in step 1210, processing receives a request to transfer a solution to a target cloud or private hypervisor. In the predefined process 1220, an existing topology and the topology model units closest to the request are found in resource library 500. The predefined process 1220 includes replacing cloud-specific model units and storing the new topology and new model units in resource library 500. See Figure 13 and corresponding text for processing details regarding the predefined process 1220. In step 1230, configuration parameters in the target topology model are fully specified and stored in the resource library 500. In the predefined process 1240, automation step models corresponding to the replaced or added cloud-specific model units are found in resource library 500. Also in the predefined process 1240, automation step models corresponding to the identified cloud-specific topology model units are found within resource library 500, and the automation step models are stored in resource library 500. [0133] The automation step models are used to implement the topology model units for the target cloud. See Figure 14 and corresponding text for processing details about the predefined process 1240. In the predefined process 1250, an implementation workflow is generated based on the automation step models (see Figure 15 and the corresponding text for processing details, and see Figure 16 and corresponding text for details regarding the implementation of a composite solution). The deployment result is the source cloud 1260 with a copy 1265 of an existing cloud-based solution and the target cloud 1270 with a new copy 1275 of the cloud-based solution. In one embodiment, both the source cloud and the target cloud are accessible from computer network 200, such as the Internet. Thus, for example, each copy can provide a copy of a web-based application accessible by customers over the computer network, for example, using client-based web browsing software. [0134] Figure 13 is a flowchart that shows the steps taken to find existing topology units that match a request, replace cloud-specific model units, and store new and modified model units in the resource library, according to one embodiment. Figure 13 is called by the predefined process 1220 shown in Figure 12. The processing shown in Figure 13 starts at 1300, after which, in step 1320, the requirements for a new cloud-based solution are received from user 1310. In step 1325, the metadata for existing topology models stored in resource library 500 is searched to find existing topology models that match, to some extent, the requirements provided by the user. In one embodiment, previously calculated differences can be retrieved from the resource library 500 as a patch and can be applied to an existing source topology model to create a target topology model.
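The retrieval and application of a stored topology difference as a patch, as just described, might look like the following Java sketch; the patch structure, unit identifiers and descriptions are assumptions made for this example.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of applying a previously stored topology difference
// ("patch") to a source topology model to derive a target topology model.
// The patch structure and unit names are assumptions made for this example.
public class TopologyPatchExample {

    record Patch(Map<String, String> addedOrModified, List<String> removed) {}

    public static void main(String[] args) {
        // Source topology model: unit id -> short description of the unit.
        Map<String, String> sourceTopology = new LinkedHashMap<>();
        sourceTopology.put("image", "virtual image for VMware");
        sourceTopology.put("appserver", "middleware runtime");
        sourceTopology.put("firewall", "source-cloud firewall rules");

        // A patch previously computed by differencing source and target models.
        Patch patch = new Patch(
                Map.of("image", "virtual image for XEN",
                       "storage", "flexible block storage"),
                List.of("firewall"));

        // Applying the patch yields the target topology model.
        Map<String, String> targetTopology = new LinkedHashMap<>(sourceTopology);
        targetTopology.putAll(patch.addedOrModified());
        patch.removed().forEach(targetTopology::remove);

        System.out.println("Target topology: " + targetTopology);
    }
}
```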
A determination is made as to whether any existing topology models that match the user's requirements have been found in the resource library (decision 1330). If no topology model currently exists in the resource library that corresponds to the user's needs, then decision 1330 branches to the "no" branch when, in the predefined process 1335, a new topology model is created (see, for example, Figure 8 and corresponding text for an example of creating a new topology model). [0135] On the other hand, if one or more topology models were found that correspond to the user's requirements, then decision 1330 branches to the "yes" branch when, in step 1340, the existing topology model found in the resource library that most closely matches the user's needs is copied. In step 1350, the new topology model is stored in the resource library (either a newly created topology model from the predefined process 1335 or an existing topology model copied in step 1340). [0136] A determination is made as to whether topology model units require modification (decision 1360). For example, if a topology model was copied in step 1340, the new target topology model may require modification if the copied topology model does not exactly match the requirements specified by the user. If one or more topology model units require modification, then decision 1360 branches to the "yes" branch when, in step 1370, the topology model units requiring modification are retrieved from the target topology model and modified to meet the user's needs. In step 1380, the modified topology model units are stored in the target topology model in resource library 500. Returning to decision 1360, if the topology model units do not require modification, then decision 1360 branches to the "no" branch, skipping steps 1370 and 1380. In step 1390, the cloud-specific model units for the target cloud are replaced. Processing then returns to the calling routine (see Figure 12) at 1395. [0137] Figure 14 is a flowchart showing the steps taken to generate an implementation workflow model according to an embodiment. Figure 14 is called by the predefined process 1240 shown in Figure 12. The processing shown in Figure 14 starts at 1400, after which, in step 1410, the first new or modified topology model unit used to implement the solution on the target cloud is identified. Note that the source topology model units that have not been modified do not need to be identified, because the automation step model already associated with an unmodified topology model unit can be used. Note that if any model units are removed from the source topology model, then the corresponding deployment operation can also be removed from the target workflow model. Also note that, in one embodiment, several topology model units can be associated with one automation step model (Automation Step Model - ASM). If multiple topology model units are associated with an automation step model, then a check can be made to see whether all of those units are present in the target topology model. In step 1420, resource library 500 is searched for automation step models associated with the target cloud. [0138] A determination is made as to whether all the corresponding automation step models have been found in the resource library (decision 1430). If no corresponding automation step model was found, then decision 1430 branches to the "no" branch when, in step 1440, a new automation step model is created for the target cloud.
On the other hand, if a corresponding automation step model was found, then decision 1430 branches to the "yes" branch, after which the found automation step model is used. In step 1450, the automation step model (either found by searching or created in step 1440) is associated with the identified new or modified topology model unit. A determination is made as to whether there are more new or modified topology model units to process (decision 1460). If there are more new or modified topology model units to process, then decision 1460 branches to the "yes" branch, which returns to identify the next new or modified topology model unit and associate it with an automation step model, as described above. Note that, for modified model units, the name of the automation step model used in the source cloud can be used in the search to find similar automation step models for the target cloud. This cycle continues until there are no new or modified topology model units left to process, at which point decision 1460 branches to the "no" branch. [0139] In step 1470, the implementation workflow model 1480 is generated for the target cloud. The workflow model is generated using the found or newly created automation step models associated with the identified new or modified topology model units, as well as the automation step models already associated with the source topology model units that have not been modified, in order to transfer the solution to the target cloud. Processing then returns to the calling routine (see Figure 12) at 1395. [0140] Figure 15 is a flowchart showing the steps taken to generate an implementation workflow from the model and to implement it using an implementation engine according to an embodiment. Figure 15 is called by the predefined process 1250 shown in Figure 12. The processing shown in Figure 15 starts at 1500, after which, in step 1510, an implementation engine that will be used to implement the solution for the target cloud is selected from the implementation engines data store 1515. Some target clouds may require a particular implementation engine, although general-purpose implementation engines can also be used to implement solutions for target clouds. Implementation engines can have different processing capabilities and characteristics that can make a given implementation engine attractive for implementing a solution for a given target cloud. In step 1520, the first implementation operation(s) are selected from the implementation workflow model 1480 that was generated in Figure 14. [0141] Each automation step model can include several implementation operations. These implementation operations are ordered sequentially in the workflow model. Implementation operations are also specific to the implementation engine. Each implementation operation is then used to generate an engine-specific step. Step 1520 generates one or more engine-specific steps that are capable of being performed by the chosen implementation engine. The generated engine-specific implementation steps are stored in the implementation workflow 1530 as the first implementation step 1531, the second implementation step 1532, etc., through the last implementation step 1534. A determination is made as to whether there are more implementation operations to be processed (decision 1540).
If there are more implementation operations to process, then decision 1540 branches to the "yes" branch, which returns to select the next implementation operation(s) from the implementation workflow model 1480 and generate engine-specific steps, as described above in step 1520. This cycle continues until there are no more implementation operations to process, at which point decision 1540 branches to the "no" branch, when the selected implementation engine 1550 is called to process the implementation workflow 1530. Implementation workflow 1530 includes the generated implementation steps and is passed to the selected implementation engine 1550 before the call. [0142] Processing by the implementation engine begins by executing the first implementation step (step 1531) included in the implementation workflow 1530. The execution of the first implementation step results in the implementation of a part of the solution on the target platform (target cloud 1270). [0143] A determination is made by the selected implementation engine as to whether there are more implementation steps to process (decision 1570). If there are more implementation steps to process, then decision 1570 branches to the "yes" branch, which returns to select and execute the next step (for example, the second implementation step 1532) from the implementation workflow 1530, resulting in the implementation of more of the solution on the target platform. This cycle continues until the last implementation step (last implementation step 1534) is processed in step 1560, at which point decision 1570 branches to the "no" branch and processing returns at 1595. [0144] The result of executing all the implementation steps is the new cloud-based solution 1275 running on the target platform (target cloud 1270). [0145] Figure 16 is a flowchart showing the steps taken to generate an implementation workflow from the model and to implement a composite solution across various cloud-based environments according to one embodiment. The steps are the same as those shown and described in Figure 15; however, in Figure 16, the implementation steps result in the solution being implemented across two target platforms (first target cloud 1610 and second target cloud 1630), resulting in a composite solution 1600. Each target cloud hosts a virtual part of the solution (virtual part 1620 hosted by the first target cloud 1610 and virtual part 1640 hosted by the second target cloud 1630). In addition, one or more of the implementation steps included in the implementation workflow 1530 establish a communication link 1650 between the virtual part 1620 and the virtual part 1640. The communication link 1650 can be established through a virtual private network (VPN). Although two clouds and two virtual parts are shown in the composite solution 1600, any number of target clouds and virtual parts can be included in a composite solution, with communication links established between any number of the virtual parts. [0146] In one embodiment, a solution for the target cloud or hypervisor can be rebuilt using a model-driven approach that can avoid: i) copying image content; and ii) representing the virtual image contents in a unified disk file format. Embodiments may allow a solution to be transferred between different cloud providers (or hypervisors) with incompatible hardware architectures, hypervisor technologies, and guest operating system types and versions. Embodiments can also allow cloud-specific (or hypervisor-specific) configurations to be added when porting.
The embodiments may also allow the inclusion of virtual image parts in a composite solution for hybrid clouds that can be partially transferred. [0147] In one embodiment, the differentiation of a source topology model associated with a source platform and a target topology model associated with a target platform, which results in a topology difference, can be obtained and/or stored in patches using a tool such as the Eclipse Modeling Framework (EMF) Compare project. In one embodiment, a part of the differentiation is performed by at least one processor that can be selected from one or more processors. In one embodiment, topology model units can be built and/or viewed using Rational Software Architect in Eclipse. In one embodiment, automation step models can be built and/or viewed using Rational Software Architect in Eclipse. Rational Software Architect stores model data in XML format. The XML includes different sections for the different model units, such as the virtual appliance, the middleware, the virtual image, the guest operating system, cloud-specific configuration, application-level communication links and so on. Each XML section can include several implementation parameters, such as software versions and types. The parameters in the virtual appliance section can be used as input to searches in the resource library to find a compatible virtual image for the target cloud. [0148] In one embodiment, obtaining an operation in a workflow model from a resource library can be done by searching Rational Asset Manager, where automation step models that include implementation operations can be stored. The search can use as input the metadata of the topology model unit associated with the automation step model. In one embodiment, a portion of the resource library can be stored on a persistent storage medium. In one embodiment, the entire resource library, including that portion of the resource library, can be stored on a persistent storage medium. [0149] In one embodiment, performing the operation to implement a part of a solution, where the implemented part of the solution includes a target image compatible with the target platform, can be done using Tivoli Provisioning Manager as an implementation engine. In one embodiment, Tivoli Provisioning Manager can run a workflow that deploys different parts of the solution to different clouds or hypervisors. [0150] Embodiments of methods, computer program products and systems for transferring a solution from a source platform to a target platform are described. A difference is determined between a set of model units in a source topology model and a set of model units in a target topology model. The source topology model is associated with a source platform and the target topology model is associated with a target platform. An operation of a workflow model is obtained from a resource library by virtue of its association with the calculated difference between the source topology model set and the target topology model set. The operation is transmitted. The operation is configured to implement at least part of a solution that comprises a target image compatible with the target platform. Such embodiments can be used to transfer solutions between clouds or hypervisors from different infrastructure providers that support different hardware architectures, virtual image formats and programming interfaces. Such embodiments can also be used to reuse solution components, configuration parameters and common deployment automation operations when transferring solutions.
[0151] According to other described embodiments, the source platform is a first set of hardware and software resources and the target platform is a second set of hardware and software resources. At least part of the solution is transferred from the first set of hardware and software resources to the second set of hardware and software resources. Such embodiments can be used to transfer a solution from one cloud (or hypervisor or computer system) to another cloud (or hypervisor or computer system). [0152] According to other described embodiments, the source platform is a private set of hardware and software resources. The target platform is a public set of hardware and software resources. Such embodiments can be used to transfer a solution from a private cloud to a public cloud. Other embodiments can be used to transfer a solution from a public cloud to a private cloud, from a private cloud to a private cloud and/or from a public cloud to a public cloud. [0153] According to other described embodiments, the solution is a composite solution. The second set of hardware and software resources comprises a plurality of sets of hardware and software resources. Such embodiments can be used to transfer different virtual parts of a solution to different clouds (or hypervisors or computer systems) that comprise a hybrid cloud. [0154] According to other described embodiments, metadata stored in a resource library is searched for at least one item of base image metadata that is associated with the target platform. Such embodiments can be used to find compatible base virtual images for the target platform on which the solution's prerequisite software components are pre-installed. [0155] According to other described embodiments, the source platform is a first hypervisor that runs on a first set of one or more computer systems. The target platform is a second hypervisor that runs on a second set of one or more computer systems. The first and second hypervisors are different types of hypervisors. Such embodiments can be used to transfer a solution from one hypervisor (or computer system) to another hypervisor (or computer system). [0156] According to other described embodiments, the determined difference comprises at least one of a new model unit, a modified model unit or a removed model unit. Such embodiments can be used for the reuse of solution components, configuration parameters and common implementation automation operations when transferring solutions. [0157] According to other described embodiments, determining the difference further comprises identifying one or more attributes of a set of model units in the source topology model and identifying whether the identified attributes are incompatible with one or more identified attributes of the set of model units in the target topology model. The determined difference may comprise the identification of incompatible attributes (including type) in model units of the source topology model compared to the target topology model. Such embodiments can be used to identify the solution components, configuration parameters and implementation automation operations that need to be modified when transferring solutions. [0158] According to other described embodiments, the identified incompatible attribute of the model unit is analyzed in response to the identification that the identified attributes are incompatible. The incompatible attribute of the model unit is modified in order to transfer the solution from the source platform to the target platform.
Such embodiments can be used to determine changes in solution components, configuration parameters and implementation automation operations for transferring solutions; they can also be used to make incompatible attributes identified on model units compatible with the topology model of the target platform. [0159] According to other described embodiments, the identified incompatible attribute indicates whether a model unit has been removed, added or modified in the target topology model when compared to the source topology model. Such embodiments can be used to identify the solution components, configuration parameters and implementation automation operations that need to be modified when transferring solutions. [0160] According to other described embodiments, modifying the incompatible attribute further includes adding a new model unit, updating the model unit or removing the model unit in order to correct the incompatibility identified between the set of model units in the source topology and the set of model units in the target topology. Such embodiments can be used to determine changes in solution components, configuration parameters and implementation automation operations for transferring solutions. [0161] According to other described embodiments, a model unit comprises data that identifies one or more attributes of a topology model. Such embodiments can be used to determine the configuration and implementation parameters for implementing a solution on a platform. [0162] According to other described embodiments, the source platform is a first hypervisor that runs on a first set of hardware and software resources. The target platform is a second hypervisor that runs on a second set of hardware and software resources. The source and target hypervisors are of different types. Such embodiments can be used to transfer a solution between virtual images compatible with different types of hypervisors. [0163] Embodiments of methods, computer program products and systems are provided that obtain a topology model unit that is to be implemented on a target platform. A plurality of automation step models stored in a resource library is searched for a selected automation step model that is associated with the obtained topology model unit. The search is performed by one or more processors. One or more deployment operations are obtained from the resource library. The obtained deployment operations are associated with the selected automation step model. The obtained deployment operations are performed in order to implement the topology model unit on the target platform. Such embodiments can be used to build a new or modified workflow model for implementing a solution on a different platform. [0164] Embodiments of methods, computer program products and systems that retrieve source image metadata from a persistent storage medium are provided. The source image metadata corresponds to a source image associated with a source platform. The retrieved source metadata is compared to one or more items of available image metadata that correspond to one or more available images associated with a target platform. The item of available image metadata that is most compatible with the source image metadata is identified based on the comparison. The available image that corresponds to the identified available image metadata is used as a target image compatible with the target platform.
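By way of illustration of the metadata comparison described in paragraph [0164], the following Java sketch scores each available target-platform image by how many of the source image's metadata entries it matches and selects the most compatible one; the metadata keys, values and scoring rule are assumptions made for this example.

```java
import java.util.Comparator;
import java.util.Map;

// Illustrative sketch of selecting, from the images available on the target
// platform, the one whose metadata is most compatible with the source image
// metadata. The metadata keys, values and scoring rule are assumptions.
public class ImageMatchingExample {

    // Count how many metadata entries of the source image are also present,
    // with the same value, in a candidate target image.
    static long compatibilityScore(Map<String, String> source, Map<String, String> candidate) {
        return source.entrySet().stream()
                .filter(e -> e.getValue().equals(candidate.get(e.getKey())))
                .count();
    }

    public static void main(String[] args) {
        Map<String, String> sourceImage = Map.of(
                "guestOs", "Linux", "middleware", "application-server", "middlewareVersion", "2.0");

        Map<String, Map<String, String>> availableImages = Map.of(
                "target-image-a", Map.of("guestOs", "Linux", "middleware", "application-server",
                                         "middlewareVersion", "1.5"),
                "target-image-b", Map.of("guestOs", "Linux", "middleware", "database-server"));

        // Pick the available image with the highest compatibility score.
        String bestMatch = availableImages.entrySet().stream()
                .max(Comparator.comparingLong(
                        (Map.Entry<String, Map<String, String>> e) -> compatibilityScore(sourceImage, e.getValue())))
                .map(Map.Entry::getKey)
                .orElseThrow();

        System.out.println("Most compatible base image: " + bestMatch);
    }
}
```

Any prerequisite components still missing from the selected image could then be added or updated by the implementation workflow, as described earlier.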
Such embodiments can be used to find compatible base virtual images for the target platform on which most (if not all) of the solution's prerequisite software components are pre-installed. [0165] It must be understood that there are several alternative embodiments. For example, in one embodiment, the invention provides a computer-readable/usable medium that includes computer program code to allow a computer infrastructure to provide the functionality discussed here. To this extent, the computer-readable/usable medium includes program code that implements each of the various processes. It should be understood that the terms computer-readable medium or computer-usable medium comprise one or more of any type of physical embodiment of the program code. In particular, the computer-readable/usable medium may comprise program code embedded in one or more portable storage articles of manufacture (for example, a compact disk, a magnetic disk, a tape, etc.), on one or more data storage portions of a computing device, such as memory 28 (Figure 1) and/or storage system 34 (Figure 1) (for example, a fixed disk, a read-only memory, a random access memory, a cache memory, etc.), and/or as a data signal (for example, a propagated signal) that travels over a network (for example, during a wired/wireless electronic distribution of the program code). [0166] In one embodiment, a method that performs the process on a subscription, advertising and/or fee basis is provided. That is, a service provider, such as a Solution Integrator, could offer the services described here. In this case, the service provider can create, maintain, support, etc., a computer infrastructure, such as computer system 12 (Figure 1), that performs the process for one or more customers. In return, the service provider may receive payment from the customer(s) under a subscription and/or fee agreement, and/or the service provider may receive payment for the sale of advertising content to one or more third parties. [0167] In one embodiment, a computer-implemented method is provided to provide the functionality described here. In this case, a computer infrastructure, such as computer system 12 (Figure 1), can be provided, and one or more systems for carrying out the process can be obtained (for example, created, purchased, used, modified, etc.) and deployed to the computer infrastructure. To this extent, the deployment of a system may comprise one or more of: (1) installing program code on a computing device, such as computer system 12 (Figure 1), from a computer-readable medium; (2) adding one or more computing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to allow the computer infrastructure to carry out the process. [0168] One of the described implementations is a software application, that is, a set of instructions (program code) or other computer program instructions in a code module that may, for example, be resident in the random access memory of the computer. Descriptive functional material includes "program code", "computer program code", "computer instructions" and any expression, in any language, code or notation, of a set of instructions intended to cause a computing device having an information processing capability to perform a particular function, either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
To this extent, the program code can be embodied as one or more of the following: an application/software program, component software/a function library, an operating system, a basic device system/driver for a particular computing device, and so on. Until required by the computer, the instruction set may be stored in another computer memory, for example, on a hard disk drive, or in removable memory such as an optical disk (for eventual use in a CD-ROM drive) or a floppy disk (for eventual use in a floppy disk drive). Thus, the embodiments can be implemented as a computer program product for use with a computer. In addition, although the several methods described are conveniently implemented on a general-purpose computer selectively activated or reconfigured by software, those skilled in the art will also recognize that such methods can be carried out in hardware, in firmware, or on a more specialized device built to perform the required method steps. Descriptive functional material is information that imparts functionality to a machine. Descriptive functional material includes, but is not limited to, computer programs, instructions, rules, facts, definitions of computable functions, objects and data structures.
[0169] An information processing system (data processing system) suitable for storing and/or executing program code may be provided and may include at least one processor communicatively coupled, directly or indirectly, to memory element(s) through a system bus. The memory elements may include, but are not limited to, local memory used during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times the code must be retrieved from bulk storage during execution. Input and/or output devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system, either directly or through intervening device drivers.
[0170] Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, remote printers, storage devices and/or the like through any combination of intervening private or public networks. Illustrative network adapters include, but are not limited to, modems, cable modems and Ethernet cards.
[0171] The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or limiting since, obviously, many modifications and variations are possible. Such modifications and variations as may be evident to those skilled in the art are intended to be included within the scope of the description, as defined by the appended claims.
[0172] Although particular embodiments have been shown and described, it will be obvious to those skilled in the art, based on the teachings described here, that changes and modifications can be made without departing from the present description and its broader aspects.
[0173] Furthermore, it should be understood that one or more embodiments are defined by the appended claims. It will be understood by those skilled in the art that, if a specific number of an introduced claim element is intended, that intent will be expressly recited in the claim and, in the absence of such recitation, no such limitation is present.
As a non-limiting example, and as an aid to understanding, the following appended claims contain usage of the introductory phrases "at least one" and "one or more" to introduce claim elements.
[0174] However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles "a" or "an" limits any particular claim containing such an introduced claim element to claims containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an"; the same holds true for the use of definite articles in the claims.
Claims (10)
[0001] 1. Method implemented by an information processing system, characterized by the fact that it comprises: receiving a request for a solution on a target platform, wherein the target platform is associated with a target topology model, and wherein the request includes the target platform and one or more requirements for the solution; comparing, by a processor, target image metadata corresponding to the one or more requirements to one or more available image metadata corresponding to one or more available virtual images associated with a plurality of platforms; identifying, based on the comparison, one of the available image metadata that is most compatible with the target image metadata; selecting the available virtual image corresponding to the identified available image metadata as a target virtual image compatible with the target platform; modifying one or more topology model units associated with the target virtual image; storing the modified topology model units in an asset library of the target topology model; and generating a delivery workflow model for the target platform, wherein the delivery workflow model comprises one or more automation step models corresponding to the modified topology model units and one or more automation step models corresponding to one or more unchanged topology model units.
[0002] 2. Method, according to claim 1, characterized by the fact that the target image metadata and the available image metadata include software component metadata.
[0003] 3. Method, according to claim 1, characterized by the fact that it further comprises: enhancing the available virtual image, wherein the enhancement comprises adding one or more software components to the available virtual image.
[0004] 4. Method, according to claim 1, characterized by the fact that it further comprises: deploying the solution on the target platform.
[0005] 5. Method, according to claim 4, characterized by the fact that deploying further comprises: selecting a deployment mechanism; generating one or more deployment mechanism steps for each automation step model in the delivery workflow model; and performing the deployment mechanism steps.
[0006] 6. Information processing system, characterized by the fact that it comprises: one or more processors; a memory accessible by at least one of the processors; a persistent storage medium accessible by at least one of the processors; a network interface that connects the information processing system to a computer network, wherein the network interface is accessible by at least one of the processors; and a set of instructions stored in the memory and executed by at least one of the processors to perform the actions of: receiving a request for a solution on a target platform, wherein the target platform is associated with a target topology model, and wherein the request includes the target platform and one or more requirements for the solution; comparing target image metadata corresponding to the one or more requirements to one or more available image metadata corresponding to one or more available virtual images associated with a plurality of platforms; identifying, based on the comparison, one of the available image metadata that is most compatible with the target image metadata; selecting the available virtual image corresponding to the identified available image metadata as a target virtual image compatible with the target platform; modifying one or more topology model units associated with the target virtual image; storing the modified topology model units in an asset library of the target topology model; and generating a delivery workflow model for the target platform, wherein the delivery workflow model comprises one or more automation step models corresponding to the modified topology model units and one or more automation step models corresponding to one or more unchanged topology model units.
[0007] 7. Information processing system, according to claim 6, characterized by the fact that the target image metadata and the available image metadata include software component metadata.
[0008] 8. Information processing system, according to claim 6, characterized by the fact that the actions further include: enhancing the available virtual image, wherein the enhancement includes adding one or more software components to the available virtual image.
[0009] 9. Information processing system, according to claim 6, characterized by the fact that the actions further comprise: deploying the solution on the target platform.
[0010] 10. Information processing system, according to claim 9, characterized by the fact that the actions further include: selecting a deployment mechanism; generating one or more deployment mechanism steps for each automation step model in the delivery workflow model; and performing the deployment mechanism steps.
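Read together, claims 1, 5 and 10 describe a pipeline: each topology model unit, whether modified for the target platform or left unchanged, is mapped to an automation step model held in a resource library, the resulting automation step models form the delivery workflow model, and that workflow is then expanded into concrete deployment-mechanism steps. The following sketch is not part of the claims; the data shapes, the dictionary-backed resource library and the flattening of operations into steps are assumptions made purely for illustration.

```python
# Minimal sketch (not from the claims): assembling a delivery workflow model
# from topology model units and expanding it into deployment-mechanism steps.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TopologyModelUnit:
    unit_id: str     # illustrative identifier, e.g. "app.server" or "db.instance"
    modified: bool   # True if the unit was changed for the target platform


@dataclass
class AutomationStepModel:
    name: str
    operations: List[str]  # deployment operations kept in the resource library


def generate_delivery_workflow(
    units: List[TopologyModelUnit],
    resource_library: Dict[str, AutomationStepModel],
) -> List[AutomationStepModel]:
    """Build a delivery workflow model with one automation step model per topology
    model unit; modified and unchanged units alike contribute a step (claim 1)."""
    workflow: List[AutomationStepModel] = []
    for unit in units:
        step = resource_library.get(unit.unit_id)
        if step is None:
            raise KeyError(f"no automation step model for unit {unit.unit_id!r}")
        workflow.append(step)
    return workflow


def generate_mechanism_steps(workflow: List[AutomationStepModel]) -> List[str]:
    """Expand each automation step model into concrete deployment-mechanism steps
    (claims 5 and 10), here simply by flattening its stored operations in order."""
    return [op for step in workflow for op in step.operations]
```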