Technical Architecture of Exchange Server 2013 Service Pack 1

Monday, May 12, 2014 Posted by

Microsoft has released the new Microsoft Exchange Server 2013 Service Pack 1 Architecture Poster. For everyone who likes it 🙂

Lync knows: People and Productivity

Tuesday, April 22, 2014 Posted by

Lync knows: Productivity is key to your business

Tuesday, April 22, 2014 Posted by

Microsoft Lync 2013: A smarter way to work

Friday, April 18, 2014 Posted by

Lync in the Classroom

Friday, April 18, 2014 Posted by

Troubleshooting System Center Operations Manager (SCOM) Server performance

Monday, April 22, 2013 Posted by

Microsoft has written an overview of troubleshooting performance for SCOM.

For OpsMgr 2007 and R2 – Root management server (RMS)

Configuration update bursts are caused by management pack imports and by discovery data. When system performance is slow, the most likely bottlenecks are, first, the CPU and, second, the OpsMgr installation disk I/O.

The RMS is responsible for generating and sending configuration files to all affected Health Services.

For workflow reloading (which is caused by new configuration on the RMS), the most likely bottlenecks are the same: the CPU first, and the OpsMgr installation disk I/O second. The RMS is responsible for reading the configuration file, for loading and initializing all workflows that run on it, and for updating the RMS HealthService store when the configuration file is updated on the RMS.

For local workflow activity bursts (which occur when agents change their availability), the most likely bottleneck is the CPU. If you find that the CPU is not working at maximum capacity, the next most likely bottleneck is the hard disk. The RMS is responsible for monitoring the availability of all agents by using RMS local workflows. The RMS also hosts distributed dependency monitors that use the disk.

Management server

During a configuration update burst (which is caused by MP imports and discovery), the typical bottlenecks are, first, the CPU and, second, the OpsMgr installation disk I/O. The management server is responsible for forwarding configuration files from the RMS to the target agents.

For Operational data collection, bottlenecks are typically caused by the CPU. The disk I/O may also be at maximum capacity, but that is not as likely. The management server is responsible for decompressing and decrypting incoming operational data, and inserting it into the Operational Database. It also sends acknowledgements (ACKs) back to the agents or gateways after it receives operational data, and uses disk queuing to temporarily store these outgoing ACKs. Lastly, the management server will also forward monitor state changes (by using a disk queue) to the RMS for distributed dependency monitors.

Gateway

The gateway is both CPU-bound and I/O-bound. When the gateway is relaying a large amount of data, both the CPU and I/O operations may show high usage. Most of the CPU usage is caused by the decompression, compression, encryption, and decryption of the incoming data, and also by the transfer of that data. All data that is received by the gateway from the agents is stored in a persistent queue on disk, to be read and forwarded to the management server by the gateway Health Service. This can cause heavy disk usage. This usage can be significant when the gateway is taken temporarily offline and must then handle the accumulated data that the agents generated and tried to send while the gateway was offline.

To troubleshoot the issue in this situation, collect the following information for each affected management server or gateway:

  • Exact Windows version, edition, and build number (for example, Windows Server 2003 Enterprise x64 SP2)
  • Number of processors
  • Amount of RAM
  • Drive that contains the Health Service State folder
  • Whether the antivirus software is configured to exclude the Health Service store

    Note For more information, see Microsoft Knowledge Base article 975931, Recommendations for antivirus exclusions that relate to Operations Manager (http://support.microsoft.com/kb/975931/).

  • RAID level (0, 1, 5, 0+1 or 1+0) for the drive that is used by the Health Service State
  • Number of disks used for the RAID
  • Whether battery-backed write cache is enabled on the array controller

Troubleshooting SQL Server Performance

Operational Database (OperationsManager)

For the OperationsManager database, the most likely bottleneck is the disk array. If the disk array is not at maximum I/O capacity, the next most likely bottleneck is the CPU. The database will experience occasional slowdowns during operational “data storms” (very high volumes of events, alerts, performance data, or state changes that persist for a relatively long time). A short burst typically does not cause any significant delay for an extended period of time.

During operational data insertion, the database disks are primarily used for writes. CPU use is usually caused by SQL Server churn. This may occur when you have large and complex queries, heavy data insertion, and the grooming of large tables (which, by default, occurs at midnight). Typically, the grooming of even large Events and Performance Data tables does not consume excessive CPU or disk resources. However, the grooming of the Alert and State Change tables can be CPU-intensive for large tables.
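A quick way to check whether those tables have grown large is a simple row count. The following is a minimal sketch that assumes the default OpsMgr 2007 table names (dbo.Alert and dbo.StateChangeEvent); adjust the names if your schema differs:

    USE OperationsManager
    GO
    -- Row counts of the tables whose grooming is most often CPU-intensive.
    -- Table names are the assumed OpsMgr 2007 defaults; verify against your schema.
    SELECT COUNT(*) AS AlertRows FROM dbo.Alert WITH (NOLOCK)
    SELECT COUNT(*) AS StateChangeEventRows FROM dbo.StateChangeEvent WITH (NOLOCK)
    GO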

The database is also CPU-bound when it handles configuration redistribution bursts, which are caused by MP imports or by a large instance space change. In these cases, the Config service queries the database for new agent configuration. This usually causes CPU spikes on the database before the service sends the configuration updates to the agents.
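When such a spike occurs, you can confirm from the database side which requests are consuming the CPU. The following is a generic sketch built on the standard SQL Server 2005+ dynamic management views (it is not an OpsMgr-supplied query):

    -- Top CPU consumers among currently executing requests (SQL Server 2005 and later).
    SELECT TOP (10)
        r.session_id,
        r.cpu_time,                            -- cumulative CPU time for the request, in ms
        r.logical_reads,
        r.wait_type,
        SUBSTRING(t.text, 1, 200) AS query_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID
    ORDER BY r.cpu_time DESC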

Data Warehouse (OperationsManagerDW)

For the OperationsManagerDW database, the most likely bottleneck is the disk array. This usually occurs because of very large operational data insertions. In these cases, the disks are mostly busy performing writes. Usually, the disks are performing few reads, except to handle manually-generated Reporting views because these run queries on the data warehouse.

CPU usage is usually caused by SQL Server churn. CPU spikes may occur during heavy partitioning activity (when tables become very large and are then partitioned), during the generation of complex reports, and when there are large numbers of alerts in the operational database, which the data warehouse must constantly synchronize.
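One quick way to see where that churn is concentrated is to list the largest tables in the data warehouse. The following is a generic sketch for SQL Server 2005 and later (nothing in it is OpsMgr-specific):

    USE OperationsManagerDW
    GO
    -- Largest user tables by allocated space; heavy partitioning and aggregation
    -- activity usually involves the tables at the top of this list.
    SELECT TOP (20)
        o.name AS TableName,
        SUM(CASE WHEN p.index_id IN (0, 1) THEN p.row_count ELSE 0 END) AS Rows,
        SUM(p.used_page_count) * 8 / 1024 AS UsedMB
    FROM sys.dm_db_partition_stats AS p
    JOIN sys.objects AS o ON o.object_id = p.object_id
    WHERE o.type = 'U'
    GROUP BY o.name
    ORDER BY UsedMB DESC
    GO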

General troubleshooting

To troubleshoot the issue in this situation, collect the following information for each SQL Server that hosts the OperationsManager or OperationsManagerDW database:

  • Exact Windows version, edition, and build number (for example, Windows Server 2003 Enterprise x64 SP2)
  • Number of processors
  • Amount of RAM
  • Amount of memory that is allocated to SQL Server
  • Whether SQL Server is 32-bit and whether AWE is enabled

    Note You can find most of this information in SQL Server Management Studio or in SQL Server Enterprise Manager. To do this, open the Properties window of the server, and then click the General and Memory tabs. The General tab includes the SQL Server version, the Windows version, the platform, the amount of RAM, and the number of processors. The Memory tab includes the memory that is allocated to SQL Server. In Microsoft SQL Server 2008 and in Microsoft SQL Server 2005, the Memory tab also includes the AWE option. To determine whether AWE is enabled in Microsoft SQL Server 2000, run the following command in the Microsoft SQL Query Analyzer:

    sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    sp_configure 'awe enabled'

    The returned values for config_value and for run_value will be 1 if AWE is enabled.

    If the OS is 32-bit and the server has 4 GB of RAM or more, check whether the /pae or /3gb switches exist in the Boot.ini file. These options could be configured incorrectly if the server was originally installed with 4 GB or less of RAM and the RAM was later upgraded.

    For 32-bit servers that have 4 GB of RAM, the /3gb switch in Boot.ini increases the amount of memory that SQL Server can address (from 2 GB to 3 GB). For 32-bit servers that have more than 4 GB of RAM, the /3gb switch in Boot.ini can actually limit the amount of memory that SQL Server can address. For these systems, add the /pae switch to Boot.ini, and then enable AWE in SQL Server (a sketch of the sp_configure commands appears after this list).

    On a multi-processor system, check the Max Degree of Parallelism (MAXDOP) setting. In SQL Server 2008 and in SQL Server 2005, this option is on the Advanced tab in the Properties dialog box for the server. To determine this setting in SQL Server 2000, run the following command in SQL Query Analyzer:

    sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    sp_configure 'max degree of parallelism'

    The default value is 0, which means that all available processors will be used. A setting of 0 is fine for servers that have eight or fewer processors. For servers that have more than eight processors, the time that it takes SQL Server to coordinate the use of all processors may be counterproductive. Therefore, for servers that have more than eight processors, you generally should set Max Degree of Parallelism to a value of 8. To do this, run the following command in SQL Query Analyzer:

    sp_configure 'show advanced options', 1
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    sp_configure 'max degree of parallelism', 8
    GO
    RECONFIGURE WITH OVERRIDE
    GO

  • Drive letters of the drives that contain the data warehouse, operational database, and Tempdb files
  • Whether the antivirus software is configured to exclude SQL data and log files (antivirus software should not scan SQL database files; scanning them can degrade performance)
  • Amount of free space on the drives that contain the data warehouse, operational database, and Tempdb files
  • Storage type (SAN or local)
  • RAID level (0, 1, 5, 0+1 or 1+0) for drives that are used by SQL Server
  • If SAN storage is used: the number of spindles on each LUN that is used by SQL Server
  • In OpsMgr 2007 SP1: whether hotfix 969130 (data warehouse event grooming) or SP1 hotfix rollup 971541 is applied
  • If the converted Exchange 2007 management pack is being used or has ever been used: the number of rows in the LocalizedText table in the Ops DB and in the EventPublisher table in the data warehouse database

    Note To determine the row counts, run the following commands:

    USE OperationsManager
    SELECT COUNT(*) FROM LocalizedText
    GO
    USE OperationsManagerDW
    SELECT COUNT(*) FROM EventPublisher
    GO
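If the AWE check earlier in this list shows a 32-bit SQL Server with more than 4 GB of RAM and AWE disabled, the option can be turned on with sp_configure. The following is a sketch only: enabling AWE also requires that the SQL Server service account hold the Lock Pages in Memory privilege and that the service be restarted, so verify the exact steps against the documentation for your SQL Server version.

    sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Enable AWE; a SQL Server service restart is required before this takes effect.
    sp_configure 'awe enabled', 1
    RECONFIGURE
    GO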

Counters to identify memory pressure

  • MSSQL$<instance>: Buffer Manager: Page Life expectancy – How long pages persist in the buffer pool. If this value is below 300 seconds, it may indicate that the server could use more memory. It could also result from index fragmentation.
  • MSSQL$<instance>: Buffer Manager: Lazy Writes/sec – Lazy writer frees space in the buffer by moving pages to disk. Generally, the value should not consistently exceed 20 writes per second. Ideally, it would be close to zero.
  • Memory: Available Mbytes – Values below 100 MB may indicate memory pressure. Memory pressure is clearly present when this amount is less than 10 MB.
  • Process: Private Bytes: _Total: This is the amount of memory (physical and page) being used by all processes combined.
  • Process: Working Set: _Total: This is the amount of physical memory being used by all processes combined. If the value for this counter is significantly below the value for Process: Private Bytes: _Total, it indicates that processes are paging too heavily. A difference of more than 10% is probably significant.
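The two SQL Server buffer manager counters in this list can also be read from inside the engine through the sys.dm_os_performance_counters view (SQL Server 2005 and later). A minimal sketch:

    -- Buffer manager counters from inside SQL Server (2005 and later).
    -- Lazy writes/sec is cumulative in this view; sample it twice and divide the
    -- difference by the elapsed seconds to get a per-second rate.
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Buffer Manager%'
      AND counter_name IN ('Page life expectancy', 'Lazy writes/sec')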

Counters to identify disk pressure

Capture these Physical Disk counters for all drives that contain SQL data or log files:

  • % Idle Time: How much disk idle time is being reported. Anything below 50 percent could indicate a disk bottleneck.
  • Avg. Disk Queue Length: This value should not exceed 2 times the number of spindles on a LUN. For example, if a LUN has 25 spindles, a value of 50 is acceptable. However, if a LUN has 10 spindles, a value of 25 is too high. You can use the following formulas, based on the RAID level and the number of disks in the RAID configuration:
    • RAID 0 (all of the disks are doing work in a RAID 0 set): Average Disk Queue Length <= (number of disks in the array) * 2
    • RAID 1 (half the disks are “doing work”, so only half of them count toward the disk queue): Average Disk Queue Length <= (number of disks in the array / 2) * 2
    • RAID 10 (half the disks are “doing work”, so only half of them count toward the disk queue): Average Disk Queue Length <= (number of disks in the array / 2) * 2
    • RAID 5 (all of the disks are doing work in a RAID 5 set): Average Disk Queue Length <= (number of disks in the array) * 2
  • Avg. Disk sec/Transfer: The number of seconds it takes to complete one disk I/O
  • Avg. Disk sec/Read: The average time, in seconds, of a read of data from the disk
  • Avg. Disk sec/Write: The average time, in seconds, of a write of data to the disk

    Note The three latency counters above (Avg. Disk sec/Transfer, sec/Read, and sec/Write) should consistently have values of approximately .020 (20 ms) or lower and should never exceed .050 (50 ms). The following are the thresholds that are documented in the SQL Server performance troubleshooting guide:

    • Less than 10 ms: very good
    • Between 10 and 20 ms: okay
    • Between 20 and 50 ms: slow, needs attention
    • Greater than 50 ms: serious I/O bottleneck

  • Disk Bytes/sec: The number of bytes being transferred to or from the disk per second
  • Disk Transfers/sec: The number of input and output operations per second (IOPS)

    When % Idle Time is low (10 percent or less), this means that the disk is fully utilized. In this case, the last two counters in this list (“Disk Bytes/sec” and “Disk Transfers/sec”) provide a good indication of the maximum throughput of the drive in bytes and in IOPS, respectively. The throughput of a SAN drive is highly variable, depending on the number of spindles, the speed of the drives, and the speed of the channel. The best bet is to check with the SAN vendor to find out how many bytes and IOPS the drive should support. If % Idle Time is low, and the values for these two counters do not meet the expected throughput of the drive, engage the SAN vendor to troubleshoot.
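If Performance Monitor data is not at hand, roughly equivalent per-file latency figures can be pulled from SQL Server itself (2005 and later) and compared against the 20 ms / 50 ms thresholds above. A minimal sketch:

    -- Average read and write latency in milliseconds per database file, cumulative
    -- since the SQL Server service last started (SQL Server 2005 and later).
    SELECT
        DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
    ORDER BY avg_read_ms DESC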


OpsMgr Performance counters

The following sections describe the performance counters that you can use to monitor and troubleshoot OpsMgr performance.

Gateway server role

  • Overall performance counters: These counters indicate the overall performance of the gateway:
    • Processor(_Total)\% Processor Time
    • Memory\% Committed Bytes In Use
    • Network Interface(*)\Bytes Total/sec
    • LogicalDisk(*)\% Idle Time
    • LogicalDisk(*)\Avg. Disk Queue Length
  • OpsMgr process generic performance counters: These counters indicate the overall performance of OpsMgr processes on the gateway:
    • Process(HealthService)\%Processor Time
    • Process(HealthService)\Private Bytes (depending on how many agents this gateway is managing, this number may vary and could be several hundred megabytes)
    • Process(HealthService)\Thread Count
    • Process(HealthService)\Virtual Bytes
    • Process(HealthService)\Working Set
    • Process(MonitoringHost*)\% Processor Time
    • Process(MonitoringHost*)\Private Bytes
    • Process(MonitoringHost*)\Thread Count
    • Process(MonitoringHost*)\Virtual Bytes
    • Process(MonitoringHost*)\Working Set
  • OpsMgr-specific performance counters: These counters are OpsMgr-specific counters that indicate the performance of specific aspects of OpsMgr on the gateway:
    • Health Service\Workflow Count
    • Health Service Management Groups(*)\Active File Uploads: The number of file transfers that this gateway is handling. This represents the number of management pack files that are being uploaded to agents. If this value remains at a high level for a long time, and there is not much management pack importing at a given moment, these conditions may generate a problem that affects file transfer.
    • Health Service Management Groups(*)\Send Queue % Used: The size of the persistent queue. If this value remains higher than 10 for a long time and does not drop, the queue is backed up. This condition is caused by an overloaded OpsMgr system; the management server or database is too busy or is offline.
    • OpsMgr Connector\Bytes Received: The number of network bytes received by the gateway – i.e., the amount of incoming bytes before decompression.
    • OpsMgr Connector\Bytes Transmitted: The number of network bytes sent by the gateway – i.e., the amount of outgoing bytes after compression.
    • OpsMgr Connector\Data Bytes Received: The number of data bytes received by the gateway – i.e., the amount of incoming data after decompression.
    • OpsMgr Connector\Data Bytes Transmitted: The number of data bytes sent by the gateway – i.e., the amount of outgoing data before compression.
    • OpsMgr Connector\Open Connections: The number of connections that are open on the gateway. This number should be the same as the number of agents or management servers that are directly connected to the gateway.

Management server role

Overall performance counters: These counters indicate the overall performance of the management server:

  • Processor(_Total)\% Processor Time
  • Memory\% Committed Bytes In Use
  • Network Interface(*)\Bytes Total/sec
  • LogicalDisk(*)\% Idle Time

  • LogicalDisk(*)\Avg. Disk Queue Length

OpsMgr process generic performance counters: These counters indicate the overall performance of OpsMgr processes on the management server:

  • Process(HealthService)\% Processor Time
  • Process(HealthService)\Private Bytes – Depending on how many agents this management server is managing, this number may vary, and it could be several hundred megabytes.
  • Process(HealthService)\Thread Count
  • Process(HealthService)\Virtual Bytes
  • Process(HealthService)\Working Set
  • Process(MonitoringHost*)\% Processor Time
  • Process(MonitoringHost*)\Private Bytes
  • Process(MonitoringHost*)\Thread Count
  • Process(MonitoringHost*)\Virtual Bytes

  • Process(MonitoringHost*)\Working Set

OpsMgr-specific performance counters: These counters are OpsMgr-specific counters that indicate the performance of specific aspects of OpsMgr on the management server:

  • Health Service\Workflow Count: The number of workflows that are running on this management server.
  • Health Service Management Groups(*)\Active File Uploads: The number of file transfers that this management server is handling. This represents the number of management pack files that are being uploaded to agents. If this value remains at a high level for a long time, and there is not much management pack importing at a given moment, these conditions may generate a problem that affects file transfer.
  • Health Service Management Groups(*)\Send Queue % Used: The size of the persistent queue. If this value remains higher than 10 for a long time and does not drop, the queue is backed up. This condition is caused by an overloaded OpsMgr system; the OpsMgr system (for example, the root management server) is too busy or is offline.
  • Health Service Management Groups(*)\Bind Data Source Item Drop Rate: The number of data items that are dropped by the management server for database or data warehouse data collection write actions. When this counter value is not 0, the management server or database is overloaded because it can’t handle the incoming data items fast enough or because a data item burst is occurring. The dropped data items will be resent by agents. After the overload or burst situation ends, these data items will be inserted into the database or into the data warehouse.
  • Health Service Management Groups(*)\Bind Data Source Item Incoming Rate: The number of data items received by the management server for database or data warehouse data collection write actions.
  • Health Service Management Groups(*)\Bind Data Source Item Post Rate: The number of data items that the management server wrote to the database or data warehouse for data collection write actions.
  • OpsMgr Connector\Bytes Received: The number of network bytes received by the management server – i.e., the size of incoming bytes before decompression.
  • OpsMgr Connector\Bytes Transmitted: The number of network bytes sent by the management server – i.e., the size of outgoing bytes after compression.
  • OpsMgr Connector\Data Bytes Received: The number of data bytes received by the management server – i.e., the size of incoming data after decompression.
  • OpsMgr Connector\Data Bytes Transmitted: The number of data bytes sent by the management server – i.e., the size of outgoing data before compression.
  • OpsMgr Connector\Open Connections: The number of connections open on the management server. This should be the same as the number of agents and the root management server that are directly connected to it.
  • OpsMgr DB Write Action Modules(*)\Avg. Batch Size: The average number of data items or batches received by database write action modules. If this number is 5,000, a data item burst is occurring.
  • OpsMgr DB Write Action Modules(*)\Avg. Processing Time: The number of seconds that a database write action module takes to insert a batch into the database. If this number is often greater than 60, a database insertion performance issue is occurring.
  • OpsMgr DW Writer Module(*)\Avg. Batch Processing Time, ms: The number of milliseconds that it takes a data warehouse write action to insert a batch of data items into the data warehouse.
  • OpsMgr DW Writer Module(*)\Avg. Batch Size: The average number of data items or batches received by data warehouse write action modules.
  • OpsMgr DW Writer Module(*)\Batches/sec: The number of batches received by data warehouse write action modules per second.
  • OpsMgr DW Writer Module(*)\Data Items/sec: The number of data items received by data warehouse write action modules per second.
  • OpsMgr DW Writer Module(*)\Dropped Data Item Count: The number of data items dropped by data warehouse write action modules.
  • OpsMgr DW Writer Module(*)\Total Error Count: The number of errors that occurred in a data warehouse write action module.

Root management server role

Overall performance counters: These counters indicate the overall performance of the root management server:

  • Processor(_Total)\% Processor Time
  • Memory\% Committed Bytes In Use
  • Network Interface(*)\Bytes Total/sec
  • LogicalDisk(*)\% Idle Time
  • LogicalDisk(*)\Avg. Disk Queue Length

OpsMgr process generic performance counters: These counters indicate the overall performance of OpsMgr processes on the root management server:

  • Process(HealthService)\% Processor Time
  • Process(HealthService)\Private Bytes (Depending on how many agents this root management server is managing, this number may vary and could be several hundred Megabytes.)
  • Process(HealthService)\Thread Count
  • Process(HealthService)\Virtual Bytes
  • Process(HealthService)\Working Set
  • Process(MonitoringHost*)\% Processor Time
  • Process(MonitoringHost*)\Private Bytes
  • Process(MonitoringHost*)\Thread Count
  • Process(MonitoringHost*)\Virtual Bytes
  • Process(MonitoringHost*)\Working Set
  • Process(Microsoft.Mom.ConfigServiceHost)\% Processor Time
  • Process(Microsoft.Mom.ConfigServiceHost)\Private Bytes
  • Process(Microsoft.Mom.ConfigServiceHost)\Thread Count
  • Process(Microsoft.Mom.ConfigServiceHost)\Virtual Bytes
  • Process(Microsoft.Mom.ConfigServiceHost)\Working Set
  • Process(Microsoft.Mom.Sdk.ServiceHost)\% Processor Time
  • Process(Microsoft.Mom.Sdk.ServiceHost)\Private Bytes
  • Process(Microsoft.Mom.Sdk.ServiceHost)\Thread Count
  • Process(Microsoft.Mom.Sdk.ServiceHost)\Virtual Bytes
  • Process(Microsoft.Mom.Sdk.ServiceHost)\Working Set

OpsMgr-specific performance counters: These counters are OpsMgr-specific counters that indicate the performance of specific aspects of OpsMgr on the root management server:

  • Health Service\Workflow Count: The number of workflows that are running on this root management server.
  • Health Service Management Groups(*)\Active File Uploads: The number of file transfers that this root management server is handling – i.e., configuration and management pack uploads to agents. If this value remains at a high level for a long time and does not drop, even though not much discovery or management pack importing is occurring at the moment, there could be a problem that affects file transfer.
  • Health Service Management Groups(*)\Send Queue % Used: The size of the persistent queue.
  • Health Service Management Groups(*)\Bind Data Source Item Drop Rate: The number of data items dropped by the root management server for database or data warehouse data collection write actions. When this counter value is not 0, the root management server or database is overloaded because it can’t handle the incoming data items fast enough or because a data item burst is occurring. The dropped data items will be resent by agents. After the overload or burst situation ends, these data items will be inserted into the database or into the data warehouse.
  • Health Service Management Groups(*)\Bind Data Source Item Incoming Rate: The number of data items received by the root management server for database or data warehouse data collection write actions.
  • Health Service Management Groups(*)\Bind Data Source Item Post Rate: The number of data items that the root management server wrote to the database or to the data warehouse for database or data warehouse data collection write actions.
  • OpsMgr Connector\Bytes Received: The number of network bytes received by the root management server – i.e., the size of incoming bytes before decompression.
  • OpsMgr Connector\Bytes Transmitted: The number of network bytes sent by the root management server – i.e., the size of outgoing bytes after compression.
  • OpsMgr Connector\Data Bytes Received: The number of data bytes received by the root management server – i.e., the size of incoming data after decompression.
  • OpsMgr Connector\Data Bytes Transmitted: The number of data bytes sent by the root management server – i.e., the size of outgoing data before compression.
  • OpsMgr Connector\Open Connections: The number of connections open on the root management server. This should be the same as the number of agents or management servers that are directly connected to it.
  • OpsMgr Config Service\Number Of Active Requests: The number of configuration or management pack requests that are being processed by the Config service.
  • OpsMgr Config Service\Number Of Queued Requests: The number of queued config or management pack requests sent to the Config service. If it is high for a long time, the instance space or management pack space is changing too frequently.
  • OpsMgr SDK Service\Client Connections: The number of SDK connections.
  • OpsMgr DB Write Action Modules(*)\Avg. Batch Size: The average number of data items or batches received by database write action modules. If this number is 5,000, a data item burst is occurring.
  • OpsMgr DB Write Action Modules(*)\Avg. Processing Time: The number of seconds that a database write action module takes to insert a batch into the database. If this number is often larger than 60, a database insertion performance issue is occurring.
  • OpsMgr DW Writer Module(*)\Avg. Batch Processing Time, ms: The number of milliseconds that it takes a data warehouse write action to insert a batch of data items into the data warehouse.
  • OpsMgr DW Writer Module(*)\Avg. Batch Size: The average number of data items or batches that are received by data warehouse write action modules.
  • OpsMgr DW Writer Module(*)\Batches/sec: The number of batches received by data warehouse write action modules per second.
  • OpsMgr DW Writer Module(*)\Data Items/sec: The number of data items received by data warehouse write action modules per second.
  • OpsMgr DW Writer Module(*)\Dropped Data Item Count: The number of data items that are dropped by data warehouse write action modules.
  • OpsMgr DW Writer Module(*)\Total Error Count: The number of errors that occurred in data warehouse write action modules.

Microsoft Exchange 2010 SP3 is released

Wednesday, February 13, 2013 Posted by
Microsoft Exchange Server 2010 Service Pack 3 (SP3)

Early last year, Microsoft announced that Exchange 2010 Service Pack 3 would be coming in the first half of 2013. Later, the timeframe was updated to Q1 2013. Today, Microsoft announced the availability of Exchange Server 2010 Service Pack 3, which is ready to download.

Service Pack 3 is a fully slipstreamed version of Exchange 2010. The following new features and capabilities are included within SP3:

  • Coexistence with Exchange 2013: Customers who want to introduce Exchange Server 2013 into their existing Exchange 2010 infrastructure will need the coexistence changes shipping in SP3. NOTE: Exchange 2010 SP3 allows Exchange 2010 servers to coexist with Exchange 2013 CU1, which is also scheduled to be released in Q1 2013. As an important coexistence preparatory step, customers can test and validate this update in a representative lab environment before rolling it out in their production environments and introducing Exchange Server 2013 CU1.
  • Support for Windows Server 2012: With SP3, you can install and deploy Exchange Server 2010 on computers that are running Windows Server 2012.
  • Support for Internet Explorer 10: With SP3, you can use IE10 to connect to Exchange 2010.
  • Customer Requested Fixes: All fixes contained within update rollups released before SP3 will also be contained within SP3. Details of our regular Exchange 2010 release rhythm can be found in Exchange 2010 Servicing.

In addition to the customer reported issues resolved in previous rollups, this service pack also resolves the issues that are described in the following Microsoft Knowledge Base (KB) articles:

Note: Some of the following KB articles may not be available at the time of publishing this post.

2552121 You cannot synchronize a mailbox by using an Exchange ActiveSync device in an Exchange Server 2010 environment

2729444 Mailboxes are quarantined after you install the Exchange Server 2010 SP2 version of the Exchange Server 2010 Management Pack

2778100 Long delay in receiving email messages by using Outlook in an Exchange Server 2010 environment

2779351 SCOM alert when the Test-PowerShellConnectivity cmdlet is executed in an Exchange Server 2010 organization

2784569 Slow performance when you search a GAL by using an EAS device in an Exchange Server 2010 environment

2796950 Microsoft.Exchange.Monitoring.exe process consumes excessive CPU resources when a SCOM server monitors Exchange Server 2010 Client Access servers

2800133 W3wp.exe process consumes excessive CPU and memory resources on an Exchange Client Access server after you apply Update Rollup 5 version 2 for Exchange Server 2010 SP2

2800346 Outlook freezes and high network load occurs when you apply retention policies to a mailbox in a mixed Exchange Server 2010 SP2 environment

2810617 Can’t install Exchange Server 2010 SP3 when you define a Windows PowerShell script execution policy in Group Policy

2787500 Declined meeting request is added back to your calendar after a delegate opens the request by using Outlook 2010

2797529 Email message delivery is delayed on a Blackberry mobile device after you install Update Rollup 4 for Exchange Server 2010 SP2

2800080 ErrorServerBusy response code when you synchronize an EWS-based application to a mailbox in an Exchange Server 2010 environment

The New Exchange Reaches RTM! [Exchange 15]

Thursday, October 11, 2012 Posted by

Today Microsoft reached an important milestone in the development of the new Exchange.

Moments ago, the Exchange engineering team signed off on the Release to Manufacturing (RTM) build. This milestone means the coding and testing phase of the project is complete and we are now focused on releasing the new Exchange via multiple distribution channels to our business customers. General availability is planned for the first quarter of 2013.

Microsoft has a number of programs that provide business customers with early access so they can begin testing, piloting and adopting Exchange within their organizations:

  • Microsoft will begin rolling out new capabilities to Office 365 Enterprise customers in our next service updates, starting in November through general availability.
  • Volume Licensing customers with Software Assurance will be able to download Exchange Server 2013 through the Volume Licensing Service Center by mid-November. These products will be available on the Volume Licensing price list on December 1.

Since announcing the Preview of the new Exchange back in July, the EHLO team has been actively blogging about the features and capabilities of the new Exchange. Microsoft is excited to start getting the finished product into the hands of our customers!

For those who are interested in learning more about the new Exchange, check out the series of posts that have been published over the past couple months:

Update Rollup 2 for Exchange 2010 Service Pack 2 (KB2661854)

Monday, April 16, 2012 Posted by
Date Published: 16/04/2012

Microsoft has released Update Rollup 2 for Exchange Server 2010 SP2 (KB2661854).

This update contains fixes for a number of customer-reported and internally found issues since the release of SP2 RU1. See KB2661854: Description of Update Rollup 2 for Exchange Server 2010 Service Pack 2 for more details.

Note: Some of the following KB articles may not be available at the time of publishing this post.

We would like to specifically call out the following fixes which are included in this release:

  • KB2696913 You cannot log on to Outlook Web App when a proxy is set up in an Exchange Server 2010 environment
  • KB2688667 High CPU in W3WP when processing recurrence items that fall on the DST cutover
  • KB2592398 PR_INTERNET_MESSAGE_ID is the same on messages resent by Outlook
  • KB2630808 EwsAllowMacOutlook Setting Not Honored
  • KB2661277 Android/iPhone devices stuck with 451 during cross-forest proxy in the datacenter
  • KB2678414  Contact name doesn’t display company if name fields are left blank

Note that this fix will not cause the CAS to CAS OWA proxying incompatibility with Exchange 2007 as discussed here. No additional updates are required on Exchange 2007 for proxying to work once Exchange 2010 SP2 RU2 is installed.

General Notes:

For DST Changes: http://www.microsoft.com/time.

Note for Forefront Protection for Exchange users: For those of you running Forefront Protection for Exchange, be sure you perform these important steps from the command line in the Forefront directory before and after this rollup’s installation process. Without these steps, Exchange services for Information Store and Transport will not start after you apply this update. Before installing the update, disable Forefront by using this command: fscutility /disable. After installing the update, re-enable Forefront by running fscutility /enable.

Update Rollup 6 for Exchange Server 2010 SP1 (KB2608646)

Monday, November 7, 2011 Posted by
Date Published: 28/10/2011

Microsoft has released the following update rollup for Exchange Server 2010 SP1:

Update Rollup 6 for Exchange Server 2010 SP1 (KB2608646) Download the rollup here.

This update contains fixes for a number of customer-reported and internally found issues since the release of SP1 RU5. See ‘KB 2608646: Description of Update Rollup 6 for Exchange Server 2010 Service Pack 1’ for more details.

In particular, we would like to call out the following fixes that are included in this release:

  • 2627769 Some time zones in OWA are not synchronized with Windows in an Exchange Server 2010 environment
  • 2528854 The Microsoft Exchange Mailbox Replication service crashes on a computer that has Exchange Server 2010 SP1 installed
  • 2544246 You receive an NRN of a meeting request 120 days after the recipient accepted the request in an Exchange Server 2010 SP1 environment
  • 2616127 “0x80041606” error code when you use Outlook in online mode to search for a keyword against a mailbox in an Exchange Server 2010 environment.
  • 2549183 “There are no objects to select” message when you try to use the EMC to specify a server to connect to in an Exchange Server 2010 SP1 environment

 

Availability of this update on Microsoft Update is planned for late November.

General Notes

An issue with management of RBAC roles when RU6 is partially deployed in the organization: Due to changes shipped in this update, certain warnings can be displayed when managing RBAC roles, if RU6 is not yet deployed to all servers in the organization. Please see the following KB article for more information:

Managing RBAC roles might display warnings or errors if Exchange 2010 SP1 RU6 is partially deployed in the organization
http://support.microsoft.com/kb/2638351

Note for Forefront users:

For those of you running Forefront Protection for Exchange, before installing the update, stop all Forefront services.