Patent abstract:
Cross-cloud management and troubleshooting. The present invention relates to a cloud management system, described herein, that provides the ability for an application to span two or more clouds while allowing operation, management, and troubleshooting of the distributed application as a single application. The system provides infrastructure that communicates across data centers for execution and for centralizing knowledge of instances of an application that are running in different locations. The infrastructure provided by the system monitors both the applications and the connections between the clouds, with intelligence to know whether issues are inside the application or caused by the connection between the clouds. The system coordinates management functions across multiple platforms/cloud locations. In this way, the cloud management system creates a single monitoring and troubleshooting interface, and a knowledge and execution framework, across multiple clouds, so that applications that span multiple clouds can be monitored, managed, and debugged more easily.
Publication number: BR112013029716B1
Application number: R112013029716-6
Filing date: 2012-05-18
Publication date: 2021-08-17
Inventors: Kannan C. Iyer; Eric B. Watson
Applicant: Microsoft Technology Licensing, LLC
IPC primary class:
Patent description:

BACKGROUND
[0001] Data centers provide servers that run large applications. Companies often use data centers to perform business functions such as sales, marketing, human resources, billing, product catalogs, and so forth. Data centers may also run customer-facing applications, such as web sites, web services, email hosts, databases, and many other applications. Data centers are typically built by determining an expected peak load and providing servers, network infrastructure, cooling, and other resources to handle that peak load level. Data centers are known to be very expensive and to be underutilized at non-peak times. They also involve a relatively high management expense, in terms of both equipment and personnel, for monitoring and performing maintenance on the data center. Because almost every company uses a data center of some kind, there are many redundant functions performed by organizations across the world.
[0002] Cloud computing has emerged as an optimization of the traditional data center.
[0003] A cloud is defined as a set of resources (e.g., processing, storage, or other resources) available through a network that can serve at least some traditional data center functions for an enterprise. A cloud often involves a layer of abstraction, such that the applications and users of the cloud may not know the specific hardware that the applications are running on, where the hardware is located, and so forth. This allows the cloud operator some additional freedom in terms of taking resources in and out of service, performing maintenance, and so forth. Clouds may include public clouds, such as MICROSOFT™ Azure, Amazon Web Services, and others, as well as private clouds, such as those provided by Eucalyptus Systems, MICROSOFT™, and others. Companies have begun to offer appliances (e.g., the MICROSOFT™ Azure Appliance) that companies can place in their own data centers to connect the data center with varying levels of cloud functionality.
[0004] Companies with data centers incur substantial costs building out large data centers, even when cloud-based resources are leveraged. Companies often still plan for "worst case" peak scenarios, and thus include a quantity of hardware at least some of which is rarely used or underutilized in terms of extra processing power, extra storage space, and so forth. This extra amount of resources incurs a high cost for little return. Users of on-premise cloud-based computing expect to be able to use capacity in another compatible cloud (e.g., a second instance of their own in another location, the MICROSOFT™ public cloud, and so forth) at times of peak capacity, for disaster recovery scenarios, or simply for capacity management. This is much less costly than building for the worst case and then duplicating it for redundancy. In addition, they expect to be able to manage (e.g., troubleshoot, operate) applications split across multiple clouds. Today, applications, cloud management, and troubleshooting do not operate across clouds or other data centers.
SUMMARY
[0005] A cloud management system is described herein that provides the ability for an application to span two or more clouds (which may be separated by large distances), while allowing operation, management, and troubleshooting of the distributed application as a single application. The system provides infrastructure that communicates across data centers for execution and for centralizing knowledge of instances of an application that are running in different locations. In some cases, the system provides a computing appliance that a company can place in its own private data center that allows an administrator to distribute at least some application loads to a public cloud or other separate locations, while providing unified management via the computing appliance. The infrastructure provided by the system monitors both the application and the connections between the clouds, with intelligence to know whether issues are inside the application or caused by the connection between the clouds. The system coordinates management functions across multiple platforms/cloud locations. If an administrator wants to debug the application, the system allows live debugging at the correct location through a unified interface. In this way, the cloud management system creates a single monitoring and troubleshooting interface, and a knowledge and execution framework, across multiple clouds, so that applications that span multiple clouds can be monitored, managed, and debugged more easily.
[0006] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Figure 1 illustrates an application running in two clouds with associated management infrastructure, in one embodiment.
[0008] Figure 2 is a block diagram that illustrates components of the cloud management system, in one embodiment.
[0009] Figure 3 is a flowchart that illustrates processing of the cloud management system to handle a request from a management tool to access data from distributed application instances, in one embodiment.
[00010] Figure 4 is a flowchart that illustrates processing of the cloud management system to report data back from, and handle troubleshooting requests at, the location of a remote application instance, in one embodiment.
DETAILED DESCRIPTION
[00011] A cloud management system is described herein that provides the ability for an application to span two or more clouds (which may be separated by large distances), while allowing operation, management, and troubleshooting of the distributed application as a single application. The system provides infrastructure that communicates across data centers for execution and for centralizing knowledge of instances of an application that are running in different locations. For example, the system can centralize logging, performance tracking, and other management functions, regardless of where the application is running. In some cases, the system provides a computing appliance that a company can place in its own private data center that allows an administrator to distribute at least some application loads to a public cloud or other separate locations, while providing unified management via the computing appliance.
[00012] The infrastructure provided by the cloud management system monitors both the application and the connections between the clouds, with intelligence to know whether issues are inside the application or caused by the connection between the clouds. The system coordinates management functions across multiple cloud platforms/locations (from one cloud's infrastructure, tasks are coordinated to run across two or more clouds). If an administrator wants to debug the application, the system allows live debugging at the correct location through a unified interface. In this way, the cloud management system creates a single monitoring and troubleshooting interface, and a knowledge and execution framework, across multiple clouds, so that applications that span multiple clouds can be monitored, managed, and debugged more easily.
[00013] Figure 1 illustrates an application running in two clouds with associated management infrastructure, in one embodiment. In some embodiments, the cloud management system involves the application (and/or the administrator) using infrastructure in one cloud that has data/data access at all of the locations, to be able to fully monitor and troubleshoot the application. As an example, consider an application with instances running in two clouds, cloud 110 and cloud 150, as shown in Figure 1. Cloud 110 includes a MICROSOFT™ Azure appliance instance 120 that includes infrastructure 130. The appliance instance 120 includes application instance 125 that is running role 140 and role 145. A second cloud 150 includes application instance 155 that is running role 160 and role 170. The second cloud 150 also includes infrastructure 180. Appliance instance 120 knows about each of the roles and that they are part of the same application. The infrastructure channels at each location allow the appliance instance 120 to retrieve information about role 160 and role 170 running in the second cloud 150. The system can distribute individual roles, entire applications, or both. With all of the management data (e.g., logs from the applications, machines, and infrastructure), the system can assess the application's health just as if all of the roles were local, by applying pre-defined health rules. The system can also see the health of the infrastructure at both locations, as well as the connection 190 in between, to assess whether a problem is occurring in the application or in the infrastructure/network.
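For illustration only, the following is a minimal Python sketch of how such pre-defined health rules might be applied over management data gathered from both clouds, including the intelligence to distinguish an application problem from a connection problem. The data classes, thresholds, and rule logic are hypothetical assumptions for the sketch, not the patented implementation.

```python
# Hypothetical sketch: applying pre-defined health rules across two clouds.
# All names and thresholds are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass
class RoleMetrics:
    location: str          # e.g., "cloud-110" or "cloud-150"
    role: str              # e.g., "role-160"
    error_rate: float      # fraction of failed requests
    cpu_usage: float       # 0.0 - 1.0

@dataclass
class LinkStatus:
    reachable: bool        # did the cross-cloud probe succeed?
    latency_ms: float

def assess_health(roles: list[RoleMetrics], link: LinkStatus) -> str:
    """Assess application health as if all roles were local, then decide
    whether issues lie inside the application or in the connection 190."""
    unhealthy = [r for r in roles if r.error_rate > 0.05 or r.cpu_usage > 0.95]
    if not unhealthy:
        return "healthy"
    # If the inter-cloud connection is down or degraded, blame the network.
    if not link.reachable or link.latency_ms > 500:
        return "connection issue between clouds"
    return f"application issue in: {[r.role for r in unhealthy]}"

print(assess_health(
    [RoleMetrics("cloud-110", "role-140", 0.01, 0.60),
     RoleMetrics("cloud-150", "role-160", 0.12, 0.70)],
    LinkStatus(reachable=True, latency_ms=40)))
```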
[00014] Similarly, when automatic or manual troubleshooting or remediation steps are needed, infrastructure 130 in cloud 110 can coordinate with infrastructure 180 in cloud 150 to provide troubleshooting and debugging support. For example, the system fabric can reach across locations to run an application-wide update, to suspend the application, and so forth. Those of ordinary skill in the art will recognize numerous ways to perform such cross-location management. For example, infrastructure 130 may directly control infrastructure 180, infrastructure 130 may request that infrastructure 180 act on infrastructure 130's behalf, and so forth. Likewise, with the operator/administrator's troubleshooting tools (e.g., monitoring views, alerting, viewing logging and configuration data, and so forth), the location of applications and infrastructure is available and logically displayed, but does not involve separate tools and mental gymnastics by the administrator to put together. For example, when troubleshooting and viewing data about all of the roles, if the next step of administrator 105 is to use one or more tools 195 to view the logs of, or initiate a remote session to, a role instance, the system connects the administrator 105 directly, regardless of the location at which the role resides.
[00015] The design of the cloud management system provides simplified and consistent execution of a service across multiple clouds/locations. The system moves the definition of "a computing resource" from a server, past a data center, to a portion of the Internet (the data centers and the connections between them). This allows service level agreements (SLAs) to be defined, monitored, and managed at the service level, which is what service owners often care about most.
[00016] In some embodiments, the cloud management system operates in cooperation with a cloud migration system that migrates application load from one location to another as needed, referred to as bursting. The cloud migration system provides capacity management and disaster recovery by detecting peak load conditions and automatically moving computing to another source (and back), and by provisioning computing across two or more clouds and moving entirely to one of them in the event of a disaster at the other location. This allows companies to plan local resources for a sustained level of load and to leverage cloud-based resources for peak or other unusual loads. In many cases, a company's business is such that particular times of year are busier, and extra resources may only be needed during those times. For example, tax planning companies are particularly busy in mid-April, e-commerce sites experience a holiday rush around Thanksgiving and Christmas, and so forth. The cloud migration system monitors loads within a data center and detects a threshold that indicates that the current load is near the data center's capacity. For example, the system may monitor central processing unit (CPU) usage, memory usage, storage usage, network bandwidth, and other metrics to determine how well the data center is handling the current load. The system may also look at trends (e.g., a rate of acceleration of resource usage) to determine whether the threshold has been reached or will soon be reached.
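A minimal sketch of the threshold-and-trend check described above, assuming hypothetical metric names, a simple one-step trend, and illustrative limits:

```python
# Hypothetical capacity check: has the threshold been reached, or will it
# soon be, based on the recent rate of resource-usage growth?
def capacity_alert(samples: list[float], threshold: float = 0.90,
                   horizon: int = 3) -> bool:
    """samples: recent utilization readings (0.0-1.0), oldest first."""
    current = samples[-1]
    if current >= threshold:
        return True                       # threshold already reached
    if len(samples) >= 2:
        rate = samples[-1] - samples[-2]  # simple one-step trend
        projected = current + rate * horizon
        if projected >= threshold:        # threshold will soon be reached
            return True
    return False

print(capacity_alert([0.70, 0.78, 0.86]))  # True: trending toward the limit
```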
[00017] Upon detecting that the threshold will be reached, the cloud migration system facilitates an orderly movement of at least some data center load to another data center or to cloud-based resources. For example, the system may migrate some peak load to a public cloud. Because cloud pricing models may vary, the system may factor cost into the decision. For example, the system may prefer to host as much load as possible in the company's data center to reduce cost, while leveraging cloud resources only to the extent needed to satisfy client requests. The system may also provide management and monitoring tools that give information technology (IT) personnel a consistent experience, regardless of where particular loads are running (e.g., locally within the enterprise or publicly in a cloud). The system may also provide planning tools to help determine appropriate workloads or applications to move to other resources during high loads. For example, applications may have various compliance/regulatory or networking/design limitations that make them more or less suitable for migration. The system can also be used as a disaster recovery architecture at the data center/network level, to manage faster workload transition in the event of a disaster. If a data center resource fails permanently, the system can quickly and efficiently migrate additional load to the cloud or to other resources so that data center customers are unaffected, or less affected, by the failure. In this way, the cloud migration system allows companies to build smaller, more efficient data centers that leverage other resources for rarely needed extra loads.
[00018] The cloud management system works with the cloud migration system to provide management and troubleshooting as applications are migrated from one location to another. As described above, the cloud migration system can move resources between a data center and the cloud on a temporary (i.e., bursting) or permanent (i.e., disaster recovery) basis. Temporary movements include bursting an application or other load for a short period of time to handle a spike or other high load that exceeds the data center's capacity. A temporary movement can include bursting a full application or splitting the application's load across two or more locations. Permanent movements include longer-term migration of loads due to a hardware failure in the data center, a more sustained increase in needed capacity, a desire to globally distribute a dynamically load-balanced application, and so forth. The following are several example scenarios in which the system could be used by a company.
[00019] In the first example, a company bursts application load to a public cloud to manage capacity. The business decision makers (e.g., CEO, CFO, or VP of Sales) and the data center systems administrator decide that it would be more cost effective, and would provide a better user experience, to burst some operations to the public cloud on the company's three extreme peak usage/traffic days per year, while maintaining its own data center (potentially with a cloud appliance) at its peak monthly usage level. They agree to sign a deal with the cloud provider for burst cloud operation and project estimates of when and how much bursting would occur. The account is set up, and the information is entered into the cloud appliance. During a planning phase, the administrator runs a test with a test application from the cloud provider that ensures the connection is operating correctly. The administrator then sets capacity values (e.g., thresholds) at which to start bursting applications, to maintain capacity at the level specified in a capacity management tool. The administrator goes to the tool and further specifies the applications that are eligible to move in this situation (e.g., no regulatory issues with a temporary move, good technical fit).
[00020] The day comes when usage exceeds the threshold, and the system automatically moves applications to the public cloud. Alerts are fired on the monitoring/usage systems when capacity comes within 5% of bursting being started, when the system bursts, what the system bursts, and when the system brings the applications back. An explicit record is kept of all compute and/or storage resources moved, and the administrator is prompted to go to the company's public cloud account for billing. The bursting parameters and the applications flagged as movable are reviewed at the regular capacity planning meetings of the company's data center management group.
[00021] In a second example, a company splits applications across clouds to manage capacity. This scenario is similar to the scenario above, except that the type of application moved is more complex, so it is split rather than moved entirely. The company decides to have a relationship with the cloud provider to split applications into the cloud (a form of bursting). In this case, a large application has been pre-identified as a bursting candidate. When capacity reaches the threshold, 50 of the 100 worker instances are automatically moved to the public cloud. The application is now split across two appliance or cloud instances, with all monitoring and billing data sent to the originating instance so that the application can be centrally managed. A cloud appliance in the company's own data center has troubleshooting tools to help debug potential split-application issues (e.g., networking issues, network bandwidth/latency issues, fabric communication, and so forth). When the capacity situation eases on the appliance, the 50 worker instances are moved back to the appliance, and the application operates normally again.
[00022] In another example, a cloud provider decides to burst from one cluster to another. The public cloud capacity planning team decides that a cluster in the Chicago data center is critically full, but wants to keep utilization high. They set up bursting to an underutilized cluster in a West Coast data center when utilization reaches 90%. The administrator goes to the capacity management tool and chooses appropriate users/applications (e.g., ones with low data usage) as candidates to move. The day comes when the Chicago cluster's usage reaches the threshold, and the system automatically moves the selected applications (e.g., 10% of the cluster's applications) to the West Coast data center for one day. When usage falls back below the threshold, the system moves the applications back to Chicago. The system proactively notifies the cluster's assigned monitoring team so that they are able to respond to user questions.
[00023] In another example, the system is used for cross-cloud portfolio management. A company decides that, to efficiently manage capacity on its cloud appliance, it wants to put all of its variable-demand applications in a public cloud and its constant-demand applications on the appliance or on-premise data center resources (and thus be able to run the appliance at higher utilization). While the company wants its computing resources split, it still wants a global view of the health of all of its applications, to have its application developers manage applications the same way, and to maintain a single view of departmental billing across both (e.g., which costs to allocate to the consumer sales group, internal IT, B2B sales, and so forth). The company is able to set up aggregation accounts with the public cloud matching the same groups as on the appliance, and to get the billing data to integrate on its side. Similarly, the company is able to get application programming interface (API) access to the public cloud's monitoring data for the platform on which its applications are running, as well as application-level monitoring, so that its network operations center (NOC) has a complete and consistent view of the state of the company's computing activity.
[00024] In another example, a company defines a globally distributed application with dynamic load balancing. An enterprise user wants to manage capacity across two or more cloud instances, and has a significant amount of load on independent, but geographically distributed, instances (e.g., Bing search with a US data center and a United Kingdom data center, both of which serve German queries). Under normal circumstances, a global traffic manager sends 50% of the traffic to each location. When the load gets high at the primary location, the system instructs the load balancer to send 75% of the traffic to the United Kingdom system, thereby freeing capacity on the US cloud instance and bringing it back to acceptable levels. When capacity returns to normal, the system informs the load balancer to return to the 50/50 split. A variation of this is for the public cloud to be used as a secondary data center (say 1% of the load, with the user's appliance location carrying the other 99%). In the event of a disaster, or for another reason to move load away from the user's location, 100% of the traffic is moved to the public cloud.
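A hedged sketch of the weight adjustment in the US/UK example above; the GlobalTrafficManager class and its API are assumptions made for illustration, not a real GTM product interface.

```python
# Hypothetical global-traffic-manager weight adjustment for the
# US/UK dynamic load-balancing example.
class GlobalTrafficManager:
    def __init__(self):
        self.weights = {"us": 50, "uk": 50}   # normal 50/50 split

    def set_weights(self, **weights: int):
        assert sum(weights.values()) == 100   # weights are percentages
        self.weights = dict(weights)

def rebalance(gtm: GlobalTrafficManager, us_utilization: float):
    if us_utilization > 0.90:      # primary location under high load
        gtm.set_weights(us=25, uk=75)
    else:                          # capacity back to normal
        gtm.set_weights(us=50, uk=50)

gtm = GlobalTrafficManager()
rebalance(gtm, us_utilization=0.95)
print(gtm.weights)  # {'us': 25, 'uk': 75}
```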
[00025] In another example, a company has reached its data center's capacity and needs extra computing resources, but does not yet have the capital available to expand the data center. In this case, the company can use a public cloud for the overflow until the hardware purchase can be completed.
[00026] Figure 2 is a block diagram that illustrates components of the cloud management system, in one embodiment. System 200 includes a location management component 210, a location data store 220, a tool interface component 230, one or more management tools 240, a data migration component 250, a troubleshooting component 260, and a billing component 270. Each of these components is described in further detail herein.
[00027] Location management component 210 manages information about multiple data center locations at which instances of an application are running. The component includes information describing how to reach each location, connections available for retrieving management information, user accounts to use at each location with associated security credentials, application and data center components from which to gather troubleshooting information and to which to send troubleshooting commands, and so forth. Location management component 210 receives information describing any migration or bursting of application loads from one data center/cloud to another, and updates the managed information so that component 210 has a complete picture of all of the locations where the application is running. This allows system 200 to present that complete picture and to produce uniform application management, no matter where or in how many locations the application is running. As conditions change and applications are distributed, location management component 210 can present management tools with a comprehensive set of management data.
[00028] Location data store 220 stores information describing the locations at which instances of the application are running. The data store 220 may include one or more files, file systems, hard drives, databases, cloud-based storage services, or other facilities for persisting information across sessions with system 200. The stored information may include connection information, user roles, management data sources, available log files, and any other information related to managing and troubleshooting applications distributed across multiple locations.
[00029] Tool interface component 230 provides an interface to system 200 through which one or more tools can access management and troubleshooting information for the application. The interface may include one or more web pages, web services, application programming interfaces (APIs), or other interfaces through which an administrator or tools can directly or programmatically access the management and troubleshooting information of system 200. In some embodiments, tool interface component 230 provides an initial connection point for tools to access application-related information on a cloud computing appliance located within an enterprise's private data center. The appliance may handle migration and distribution of application loads to a public cloud or other data center, and provides a central point of contact for tools that gather management information or provide application troubleshooting.
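As a non-authoritative illustration, such a uniform tool interface might be sketched in Python as follows; the class and method names are assumptions for the sketch, not an API defined by the patent.

```python
# Hypothetical sketch of a uniform tool interface: one entry point that
# hides how many locations the application is distributed across.
from abc import ABC, abstractmethod

class ToolInterface(ABC):
    """What tool interface component 230 might expose to management tools."""

    @abstractmethod
    def get_management_data(self, app: str, data_type: str) -> list[dict]:
        """Return merged management data (logs, performance counters, ...)
        gathered from every location where the application runs."""

    @abstractmethod
    def send_troubleshooting_command(self, app: str, command: dict) -> dict:
        """Run a troubleshooting action (breakpoint, trace, test request)
        wherever the relevant instance is running; return the result."""
```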
[00030] One or more management tools 240 connect to the tool interface component 230 to access management information or to perform application troubleshooting. The tools may include log viewers, reporting tools, debugging tools, or other tools that display information about, or aid in resolving, problems with a running application. Management tools 240 may include tools designed to operate on a local application, where system 200 provides the tools with information describing a distributed application running at multiple locations without the tool's knowledge. This allows existing tools that administrators trust to continue to be used even as automatic migration of application loads is introduced into a data center or cloud. In other cases, tools may be written specifically to understand distributed applications and to provide specific management information or troubleshooting related to multiple locations. Tool interface component 230 may provide multiple interfaces through which management tools 240 connect to system 200 using paradigms understood by each tool.
[00031] Data migration component 250 migrates management information from one or more remote locations where the application is running back to the application's home location. The home location may include a private data center, a cloud computing appliance location, or another location where the application normally runs under steady conditions. Upon reaching a threshold load level (e.g., peak or periodic bursts), the application may migrate some load to one or more other data centers or public clouds to help satisfy client requests. These other locations generate management data, such as log files, transaction data, and so forth, just like the home location, and data migration component 250 migrates this data back to the home location, or provides access to the data from the home location, so that management tools 240 can provide administrators with a comprehensive view of application activity.
[00032] Troubleshooting component 260 performs application troubleshooting tasks at one or more locations. Troubleshooting may include debugging, processing test data, or other ways of determining whether an application is operating correctly. Troubleshooting is generally well understood at the home location, but becomes more complex as the application begins to span multiple data centers or clouds. The cloud management system 200 insulates the management tools 240 and administrators from this complexity by providing a uniform interface through which tools and administrators access management information and perform troubleshooting at multiple locations. Thus, if a management tool allows an administrator to set a breakpoint on, or receive trace information for, a particular piece of application code at the home location, then troubleshooting component 260 makes it just as easy to do so on a remote cloud-based instance of the application. The tools and administrator may not even be aware of all of the locations where the application is running, and can still perform management tasks as if the application were only running at the home location.
[00033] Billing component 270 reports billing information related to one or more locations where the application is running. A common management task is managing computing costs, and public clouds often charge based on metrics related to workload (e.g., compute time, storage space used, and so forth). It can be useful for an administrator to get an overall picture of the costs that application instances are incurring at the various locations, and the cloud management system 200 can optionally provide billing component 270 to gather this type of information so that it can be reported through management and reporting tools.
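A minimal sketch of a cross-location billing rollup of the kind billing component 270 might perform, assuming a hypothetical record shape (location name mapped to per-department charge records):

```python
# Hypothetical cross-location billing rollup for billing component 270.
from collections import defaultdict

def aggregate_billing(location_charges: dict[str, list[dict]]) -> dict[str, float]:
    """location_charges maps a location name to charge records such as
    {"department": "B2B sales", "amount": 12.50}; returns total cost per
    department across every location where the application runs."""
    totals: dict[str, float] = defaultdict(float)
    for charges in location_charges.values():
        for charge in charges:
            totals[charge["department"]] += charge["amount"]
    return dict(totals)

print(aggregate_billing({
    "appliance":    [{"department": "internal IT", "amount": 40.0}],
    "public-cloud": [{"department": "internal IT", "amount": 15.5},
                     {"department": "B2B sales", "amount": 99.0}],
}))
```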
[00034] The computing device on which the cloud management system is implemented may include a central processing unit, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), and storage devices (e.g., disk drives or other non-volatile storage media). The memory and storage devices are computer-readable storage media that may be encoded with computer-executable instructions (e.g., software) that implement or enable the system. In addition, the data structures and message structures may be stored or transmitted via a data transmission medium, such as a signal on a communication link. Various communication links may be used, such as the Internet, a local area network, a wide area network, a point-to-point dial-up connection, a cell phone network, and so forth.
[00035] Embodiments of the system may be implemented in various operating environments that include personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, digital cameras, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, set-top boxes, systems on a chip (SOCs), and so forth. The computer systems may be cell phones, personal digital assistants, smart phones, personal computers, programmable consumer electronics, digital cameras, and so forth.
[00036] The system may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and so forth, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
[00037] Figure 3 is a flowchart that illustrates processing of the cloud management system to handle a request from a management tool to access data from distributed application instances, in one embodiment. Beginning in block 310, the system receives, from a management tool, a request to access management data related to instances of an application running in one or more data centers. For example, a performance monitoring tool may request state information describing how many client requests the application is handling, the application's resource usage, or other information about the application. The system may receive the tool's request through a system API exposed to tools for requesting management data. The API may comprise a uniform interface for accessing management data regardless of where or in how many locations application instances are running.
[00038] Continuing in block 320, the system identifies one or more types of management data that satisfy the received request. For example, the system may determine that the request queries for logging information that is produced by each instance of the application.
[00039] Identifying the requested data allows the system to determine which information to gather from each application instance, or whether the data has already been gathered locally from data sent to a central location by each application instance.
[00040] Continuing in block 330, the system determines an application distribution that includes two or more application instances. The distribution determines where the application is running and where the system will find management data to satisfy the request. The system may include a data store that tracks information describing each burst or other migration of application load to and from other data centers, so that the system is aware of each location where application instances are running. Upon receiving the management tool's request, this information allows the system to determine where to gather management data from.
[00041] Continuing in block 340, the system gathers management data that satisfies the request from each distributed application instance. The instances may include an instance in a local private data center, a remote private data center, a private cloud computing facility, a public cloud computing facility, spare resources offered by other private data centers, and so forth. The system contacts each application instance, or accesses previously sent information from each instance, that contains the information (such as performance data, failures, and so forth) to satisfy the request received from the management tool.
[00042] Continuing in block 350, the system optionally sends one or more troubleshooting commands to one or more remote application instances. For example, if a location is experiencing failures, the administrator may use a management tool to request additional trace information, to send one or more test requests, or to perform other types of debugging. The remote application instances execute the troubleshooting commands and report the requested data back to a central location where the management tool can access the information.
[00043] Continuing in block 360, the system merges the gathered data to provide a uniform response to the request received from the management tool. In this way, management tools do not need to be written to include an understanding of the various potential distributions of the applications they manage. The system can thus freely migrate the application from location to location, or to multiple locations, as needed to handle application loads, while still providing administrators with a simpler management and troubleshooting experience.
[00044] Continuing in block 370, the system reports the gathered and merged management data in response to the request received from the management tool. The system may send the data through the interface on which the request was received, through a notification interface, or through another facility for providing data to the tool. After block 370, these steps conclude.
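For illustration, the blocks of Figure 3 can be condensed into one hedged Python sketch; the instance objects, the `fetch` call, and the in-memory distribution store are assumptions made for the example, not the patented implementation.

```python
# Hypothetical end-to-end sketch of the Figure 3 flow: receive a tool
# request, determine the distribution, gather from each instance, merge,
# and report.
def handle_tool_request(request: dict, distribution_store: dict) -> dict:
    app = request["app"]
    data_type = request["data_type"]                      # block 320: identify
    instances = distribution_store[app]                   # block 330: determine
    gathered = []
    for instance in instances:                            # block 340: gather
        gathered.extend(instance.fetch(data_type))
    merged = sorted(gathered, key=lambda rec: rec["ts"])  # block 360: merge
    return {"app": app, "records": merged}                # block 370: report

class FakeInstance:
    """Stand-in for an application instance at one location."""
    def __init__(self, location, records):
        self.location, self.records = location, records
    def fetch(self, data_type):
        return [dict(rec, location=self.location) for rec in self.records]

store = {"shop": [FakeInstance("home-dc", [{"ts": 1, "msg": "ok"}]),
                  FakeInstance("public-cloud", [{"ts": 2, "msg": "slow"}])]}
print(handle_tool_request({"app": "shop", "data_type": "logs"}, store))
```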
[00045] Figure 4 is a flowchart that illustrates processing of the cloud management system to report data back from, and handle troubleshooting requests at, the location of a remote application instance, in one embodiment. Beginning in block 410, the system receives management data at a remote application instance that is handling a portion of the load generated by requests from the application's clients. The management data may include performance data, log information, error details, statistical information, sales history, or other indications of the application's operation that are useful for managing the application.
[00046] Continuing in block 420, the system determines a home location of the application, at which an administrator can access management data reported by multiple application instances running at remote distributed locations. The application instance may receive configuration information from the home location upon instance creation that specifies where the home location can be contacted and that the application instance is a remote instance of the application. The system may migrate applications to multiple locations to handle peak loads, to perform low-priority tasks at locations where processing is off-peak and therefore less costly, or for other reasons determined by an administrator. The application may have a home location where the application normally runs, and may handle peak or other loads at one or more remote distributed locations.
[00047] Continuing in block 430, the system sends the management data received at the remote application instance to the determined home location of the application. The system may periodically migrate data generated at the distributed instances back to the home location, so that the management data is available in one place at the home location, for the convenience of administrators and management tools. The system may also migrate data on demand, or as requested by various tools (see, e.g., Figure 3). In some cases, the system may burst application loads to remote locations for short durations, and then collect information related to the application's execution as the loads are migrated back to the home location and the remote instances are shut down.
[00048] Continuing in block 440, the system optionally receives a troubleshooting request from a management tool running at the home location, for troubleshooting the remote application instance. Troubleshooting requests may include debugging breakpoints, a request for verbose trace information, or other commands or requests for performing troubleshooting actions.
[00049] Continuing in block 450, the system performs one or more troubleshooting actions in response to the received troubleshooting request. The actions may include setting a debugging breakpoint, raising a logging level, sending test data to the application, or taking any other action specified by the request to determine whether the application is operating correctly.
[00050] Continuing in block 460, the system sends a troubleshooting result to the home location in response to the received troubleshooting request. By providing a facility for running troubleshooting commands remotely, the system allows troubleshooting tools to operate at the home location against application instances no matter where the instances are running, and allows the system to migrate application instances to multiple locations without interrupting an administrator's ability to monitor and troubleshoot the application. After block 460, these steps conclude.
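The remote-instance side of Figure 4 can likewise be sketched under stated assumptions; the RemoteInstance and HomeLocation classes, the buffered flush, and the single supported troubleshooting action are all illustrative.

```python
# Hypothetical sketch of the remote-instance side (Figure 4): ship local
# management data home and execute troubleshooting requests from home.
class RemoteInstance:
    def __init__(self, home, log_level="info"):
        self.home = home                 # block 420: configured home location
        self.log_level = log_level
        self.buffer = []

    def record(self, entry: dict):
        self.buffer.append(entry)        # block 410: local management data

    def flush_home(self):
        self.home.receive(self.buffer)   # block 430: send data home
        self.buffer = []

    def troubleshoot(self, request: dict) -> dict:
        # blocks 440-460: execute the action, return the result home
        if request["action"] == "raise_log_level":
            self.log_level = request["level"]
        return {"status": "done", "log_level": self.log_level}

class HomeLocation:
    def __init__(self):
        self.data = []
    def receive(self, records):
        self.data.extend(records)

home = HomeLocation()
inst = RemoteInstance(home)
inst.record({"ts": 1, "err": "timeout"})
inst.flush_home()
print(home.data)
print(inst.troubleshoot({"action": "raise_log_level", "level": "trace"}))
```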
[00051] In some embodiments, the cloud management system migrates application load by modifying Domain Name System (DNS) records. The system can modify a DNS server to point incoming client requests to one or more new Internet Protocol (IP) addresses, to direct loads away from a source data center to a target data center/cloud. A global traffic manager (GTM) often points clients to the nearest server that can handle their requests, and such solutions can be modified to redirect traffic based on load or other conditions. In this way, when a data center becomes overloaded or nears capacity, the system can inform the GTM to direct at least some client requests to a new location that can handle the excess load. Similarly, the system can provide a DNS or other address to which management tools can direct management requests and be connected to application instances, no matter where the instances reside.
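A toy sketch of such a DNS-record update, with the "zone" modeled as a plain dictionary; real DNS/GTM products expose their own update APIs, which this deliberately does not imitate.

```python
# Hypothetical DNS-record update redirecting client traffic from a source
# data center to a target cloud when the source is overloaded.
def redirect_load(zone: dict, hostname: str, target_ip: str,
                  overloaded: bool) -> None:
    """Point the application's DNS name at the target location when the
    source data center is overloaded, and back when it is not."""
    source_ip = zone[hostname]["source"]
    zone[hostname]["a_record"] = target_ip if overloaded else source_ip

zone = {"app.example.com": {"source": "10.0.0.5", "a_record": "10.0.0.5"}}
redirect_load(zone, "app.example.com", "52.0.0.9", overloaded=True)
print(zone["app.example.com"]["a_record"])  # 52.0.0.9
```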
[00052] In some embodiments, the cloud management system migrates log and other data back from the target computing resources after the conditions that caused the migration have eased. For example, following a peak load period, the system may migrate all application loads back to the original data center, and may absorb information generated at the target data center, such as application logs, back into the original data center for later analysis. For some applications, keeping track of client requests may be a matter of regulatory compliance, or simply useful for debugging and reporting. In either case, consolidating the logs at the source location can be part of a successful migration back to the source location.
[00053] In some embodiments, the cloud management system allocates a dynamically variable amount of application load between a source computing resource and one or more target computing resources. For example, the system may dynamically route requests to keep the source computing resource at or near full capacity, and only send to external computing resources those requests that the source computing resource cannot successfully handle. Such decisions may be a matter of cost, data security, or other considerations, so as to migrate out as little application load as needed, or to place application loads where they can be executed less expensively or more efficiently. In some cases, the decisions may be based on regulatory requirements of the applications. For example, applications subject to healthcare or other record-keeping laws may have restrictions on the data centers/clouds in which they can operate.
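A minimal sketch of such a routing decision, assuming illustrative inputs (a load counter, a capacity, and a per-application eligibility flag standing in for regulatory constraints):

```python
# Hypothetical request router: keep the source resource near capacity and
# spill only the excess to a (possibly costlier) target cloud.
def route_request(source_load: int, source_capacity: int,
                  eligible_for_cloud: bool) -> str:
    """Return which resource should serve the next request."""
    if source_load < source_capacity:
        return "source"                  # prefer local: lower cost
    if eligible_for_cloud:               # e.g., no regulatory restriction
        return "target-cloud"
    return "source-queued"               # must stay local regardless of load

print(route_request(99, 100, True))    # source
print(route_request(100, 100, True))   # target-cloud
print(route_request(100, 100, False))  # source-queued (e.g., healthcare data)
```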
[00054] In some embodiments, the cloud management system provides several options for disaster recovery. In some cases, the system may enlist resources in an external data center to monitor a main data center for outages. If the external data center becomes unable to reach the main data center, then the external data center can determine that a disaster has occurred and move application loads to the external data center. In previous systems, it was typical for an organization to maintain 200% of the needed capacity (at substantial expense) in order to successfully handle disasters. With the cloud management system, the organization can keep a lower amount of available capacity at a second location (e.g., 10%), and can quickly provision more as needed in the event of a failure.
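A hedged sketch of the outage watch described above, assuming a simple HTTP health probe and a consecutive-failure rule; the probe URL, failure count, and interval are illustrative.

```python
# Hypothetical disaster detector running in the external data center: if
# several consecutive probes of the main data center fail, assume a
# disaster and move application loads here.
import time
import urllib.request

def main_dc_reachable(url: str, timeout: float = 2.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:          # URLError/HTTPError are OSError subclasses
        return False

def watch(url: str, failures_needed: int = 3, interval: float = 5.0):
    failures = 0
    while failures < failures_needed:
        failures = 0 if main_dc_reachable(url) else failures + 1
        time.sleep(interval)
    print("disaster assumed: moving application loads to this data center")
```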
[00055] Likewise, the probability of all of a cloud provider's clients failing over at the same time and requiring large amounts of spare capacity is low, so multiple clients can share a set of redundant secondary resources to be used in the event of a failure of primary resources.
[00056] The system can also repoint the management tools and troubleshooting facilities to the new location following disaster recovery, so that management remains uninterrupted.
[00057] From the foregoing, it will be appreciated that specific embodiments of the cloud management system have been described herein for purposes of illustration, but that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
Claims:
Claims (14)
[0001]
1. Computer-implemented method for handling a request by a management tool (240) to access application management data from distributed application instances (125, 155), characterized by the fact that it comprises the steps of: receiving (310), from an application management tool (240), a management tool (240) request to access management data related to instances of an application running in one or more data centers, wherein receiving the management tool (240) request comprises receiving a request from a performance monitoring tool to access status information describing an operation of one or more instances of the application; identifying (320) one or more types of management data that fulfill the received request; determining (330) an application distribution that includes two or more instances of the application; gathering (340) management data to satisfy the request from each distributed application instance; merging (360) the gathered management data to provide a uniform response to the received management tool (240) request; and reporting (370) the gathered and merged management data in response to the received management tool (240) request, wherein the foregoing steps are performed by at least one processor.
[0002]
2. Method according to claim 1, characterized in that the step of receiving the management tool request comprises receiving the request from the management tool (240) through a programmatic application programming interface (API) exposed to tools for requesting management data.
[0003]
3. Method according to claim 2, characterized by the fact that the API comprises a uniform interface to access management data without requiring tools to understand where or in how many locations the application instances are running.
[0004]
4. Method according to claim 1, characterized in that the step of identifying one or more types of management data comprises determining that the management tool request (240) requests information produced by each instance of the application.
[0005]
5. Method according to claim 1, characterized in that the step of identifying one or more types of management data comprises determining which information to gather from each instance of the application and whether the one or more types of management data are already gathered locally from data sent to a central location by each application instance.
[0006]
6. Method according to claim 1, characterized in that the step of determining the application distribution comprises determining where the application is running and where the system will find management data to satisfy the request.
[0007]
7. Method according to claim 1, characterized in that the step of gathering management data comprises accessing at least one instance in a private data center and at least one instance in a cloud computing facility.
[0008]
8. Method according to claim 1, characterized in that the step of gathering management data comprises contacting each instance of the application to satisfy the received management tool (240) request.
[0009]
9. Method according to claim 1, characterized in that it further comprises the step of sending one or more troubleshooting commands to one or more remote application instances, wherein the remote application instances execute the troubleshooting commands and report the requested data back to a central location where the management tool (240) can access information associated with the reported requested data.
[0010]
10. Method according to claim 1, characterized in that the step of merging the gathered data comprises formatting the data so that management tools do not need to be written to include an understanding of the various potential distributions of applications managed by the management tools.
[0011]
11. Method according to claim 1, characterized in that the step of merging the gathered data comprises formatting the data so that the computer system can freely migrate the application from location to location, or to multiple locations, as necessary to handle application loads without disrupting administrators' ability to manage and troubleshoot application issues.
[0012]
12. Method according to claim 1, characterized in that the step of reporting the data comprises sending the data to the management tool (240) through an interface on which the management tool (240) request was received.
[0013]
13. Computer system for accessing application management data from distributed application instances, characterized in that it comprises: a processor and memory configured to execute a method embodied within the following components: a location management component (210) that manages information about various data center locations where instances of an application are running; a location data store (220) that stores information describing the locations where instances (125, 155) of the application are running; a tool interface component (230) that provides an interface to the system through which one or more tools can access management information for the application; and one or more application management tools (240) that connect to the tool interface component to access management information; characterized by the fact that the method further comprises the steps of: receiving (310), from the one or more application management tools (240), a management tool (240) request to access management data related to the application, wherein receiving the management tool (240) request comprises receiving a request from a performance monitoring tool to access status information describing an operation of the application instances; identifying (320) one or more types of management data that respond to the received request; determining (330) an application distribution that includes two or more instances of the application based on information stored in the location data store (220); gathering (340) management data to satisfy the request from each distributed application instance; merging (360) the gathered management data to provide a uniform response to the received management tool (240) request; and reporting (370) the gathered and merged management data in response to the received management tool (240) request.
[0014]
14. System according to claim 13, characterized in that the tool interface component is further configured to provide an initial connection point for the tools to access information related to the application on a cloud computing appliance (120) located within a company's private data center.
Similar technologies:
Publication number | Publication date | Patent title
BR112013029716B1|2021-08-17|COMPUTER IMPLEMENTED METHOD TO HANDLE A REQUEST FOR A COMPUTER MANAGEMENT TOOL AND COMPUTER SYSTEM TO ACCESS APPLICATION MANAGEMENT DATA FROM DISTRIBUTED APPLICATIONS INSTANCES
US10044551B2|2018-08-07|Secure cloud management agent
US9413604B2|2016-08-09|Instance host configuration
US8719627B2|2014-05-06|Cross-cloud computing for capacity management and disaster recovery
US8966025B2|2015-02-24|Instance configuration on remote platforms
US10749773B2|2020-08-18|Determining a location of optimal computing resources for workloads
CA2898478C|2017-11-14|Instance host configuration
Mathews et al.2019|Service resilience framework for enhanced end-to-end service quality
Endo et al.2017|Highly available clouds: system modeling, evaluations, and open challenges
US10303678B2|2019-05-28|Application resiliency management using a database driver
US9092397B1|2015-07-28|Development server with hot standby capabilities
US20210173582A1|2021-06-10|Maintaining namespace health within a dispersed storage network
Shackelford2015|OpsWorks Part II: Databases and Scaling
Patent family:
Publication number | Publication date
WO2012162171A3|2013-02-21|
US9223632B2|2015-12-29|
WO2012162171A2|2012-11-29|
US20160119202A1|2016-04-28|
CN103548009B|2017-02-08|
JP2014515522A|2014-06-30|
MX2013013577A|2014-07-30|
AU2012259086B2|2016-09-22|
BR112013029716A2|2017-01-24|
KR101916847B1|2019-01-24|
MX347110B|2017-04-12|
EP2710484A2|2014-03-26|
CN103548009A|2014-01-29|
EP2710484B1|2020-02-26|
US20120297016A1|2012-11-22|
US10009238B2|2018-06-26|
RU2604519C2|2016-12-10|
KR20140026503A|2014-03-05|
MX366620B|2019-07-16|
JP5980914B2|2016-08-31|
RU2013151607A|2015-05-27|
EP2710484A4|2016-04-06|
CA2835440A1|2012-11-29|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US6862736B2|1998-02-06|2005-03-01|Microsoft Corporation|Object manager for common information model|
US20020120741A1|2000-03-03|2002-08-29|Webb Theodore S.|Systems and methods for using distributed interconnects in information management enviroments|
WO2002010944A1|2000-08-01|2002-02-07|Qwest Communications International Inc.|Performance modeling, fault management and repair in a xdsl network|
CA2319918A1|2000-09-18|2002-03-18|Linmor Technologies Inc.|High performance relational database management system|
JP4542253B2|2000-11-06|2010-09-08|株式会社日本コンラックス|Promotion system|
US7600014B2|2000-11-16|2009-10-06|Symantec Corporation|Method and system for monitoring the performance of a distributed application|
KR100346185B1|2000-12-01|2002-07-26|삼성전자 주식회사|System and method for managing alarm in network management system|
US20020103886A1|2000-12-04|2002-08-01|International Business Machines Corporation|Non-local aggregation of system management data|
US7305461B2|2000-12-15|2007-12-04|International Business Machines Corporation|Method and system for network management with backup status gathering|
US7337473B2|2000-12-15|2008-02-26|International Business Machines Corporation|Method and system for network management with adaptive monitoring and discovery of computer systems based on user login|
US7430594B2|2001-01-26|2008-09-30|Computer Associates Think, Inc.|Method and apparatus for distributed systems management|
JP2002300308A|2001-03-30|2002-10-11|Ricoh Co Ltd|Customer support system, office system, customer support center, supply center and customer support method|
US7010593B2|2001-04-30|2006-03-07|Hewlett-Packard Development Company, L.P.|Dynamic generation of context-sensitive data and instructions for troubleshooting problem events in a computing environment|
US20020161876A1|2001-04-30|2002-10-31|Robert Raymond|System and method for managing data miner modules in an information network system|
JP2002366454A|2001-06-11|2002-12-20|Fujitsu Ltd|Network managing method and its device|
JP2003101586A|2001-09-25|2003-04-04|Hitachi Ltd|Network management support method|
US20050262229A1|2004-04-16|2005-11-24|Samsung Electronics Co., Ltd.|Object conduit MIB for communicating over SNMP between distributed objects|
US20060031446A1|2004-06-17|2006-02-09|Mohamed Hamedi|Gathering network management data using a command line function|
Legal status:
2017-07-25 | B25A | Requested transfer of rights approved | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC (US)
2018-12-11 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2019-10-29 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette]
2021-03-23 | B07A | Application suspended after technical examination (opinion) [chapter 7.1 patent gazette]
2021-06-08 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-08-17 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 18/05/2012, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
US13/111,956 (US9223632B2) | 2011-05-20 | Cross-cloud management and troubleshooting
PCT/US2012/038647 (WO2012162171A2) | 2012-05-18 | Cross-cloud management and troubleshooting