Q. How many FC ports (maximum) can be assigned to LPARs on a single NPIV card?

A. A single NPIV adapter has two ports, and each port can support up to 64 virtual Fibre Channel (VFC) adapters, so using both ports there is support for 128 VFCs. For redundancy, two VFCs from a single client must go through separate server VFC adapters and thus through different physical Fibre Channel ports. Multi-pathing therefore drives up the number of physical adapters required to support larger numbers of clients.

Q. How many worldwide port names (WWPNs) can be generated on a single adapter?

A. Each port comes with 2048 WWPNs, or 4096 per adapter. If all WWPNs are exhausted, more can be ordered from IBM.
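
A quick way to check these limits on a live system is the lsnports command on the VIOS, which reports the NPIV capacity of each physical port. This is a minimal sketch; the column names noted below are typical but can vary by VIOS level.

    # Run as padmin on the VIOS; no arguments are needed.
    lsnports
    # Typical columns: name, physloc, fabric, tports/aports (total/available
    # virtual ports) and swwpns/awwpns (total/available WWPNs per physical port).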

Q. Since POWER8 should be announced soon, will there be any limitations with LPM from POWER6 to POWER8?

A. We can't comment on unannounced products. The requirement to migrate from older architectures to the latest is well known. LPM is a core function of PowerVM.

Q. Is it possible to keep the same partition ID in a migration?

A. The partition ID number can be maintained only if that number is currently unassigned on the destination server. If the mobile partition was initially created with a number from a predetermined range, and all IDs were strictly controlled across all potential servers, you could retain the same ID number. The same is true for slot numbers. Very rarely is this amount of control exerted, and it doesn't really matter what the partition ID number is.
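
If you do want to check before a move, the HMC can list the partition IDs already in use on the destination. A minimal sketch; "TargetSystem" is a placeholder for your managed system name.

    # On the HMC managing the destination server, list names and IDs in use:
    lssyscfg -r lpar -m TargetSystem -F name,lpar_id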

Q. To what extent may dedicated adapter cards be used?

A. If you have a dedicated adapter, it must be removed via DLPAR before a migration can begin. That means that if you have a mixture of virtual and dedicated I/O, you would have to be able to keep running while removing the dedicated adapter. In other words, the I/O would have to be virtualized, at least temporarily. On the destination server, assuming you have a spare physical adapter you could add back in, you could DLPAR it into the configuration.
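
A hedged sketch of that sequence, assuming an AIX client on an HMC-managed system; adapter names, DRC indexes, and system/partition names are placeholders, and the exact chhwres syntax for physical I/O should be checked against your HMC level.

    # On the AIX client, unconfigure the dedicated adapter and its children first:
    rmdev -dl fcs2 -R
    # On the HMC, find the slot's DRC index, then DLPAR-remove it from the partition:
    lshwres -r io --rsubtype slot -m SourceSystem -F drc_index,description,lpar_name
    chhwres -r io -m SourceSystem -o r -p prodlpar1 -l 21010121
    # After the migration, a spare adapter on the destination can be added back
    # the same way with "-o a".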

Q. I have a dual-VIOS NPIV configuration. When I do a migration, it does not keep the same mappings of the virtual-to-real adapters. Is there a way to make this work, or do you have to run vfcmap after the migration to fix it?

A. There are two issues here. If you are concerned about maintaining the same IDs, that should not really be necessary, and it should not matter. If the concern is that redundancy across two switch fabrics is not maintained, there are two ways to address it. One is to be sure you match client VFC adapters to the correct server adapters that go to the correct fabrics; you may need to monitor this from the graphical interface. Second, if you are using the migrlpar command line, you can use the --mpio flag with a value of 1 (--mpio 1) to force LPM to duplicate the MPIO settings on the destination server. As long as they are going to ports on different switches, it does not matter what the fcsN is.
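
As a hedged illustration (system and partition names are placeholders), the migration can be driven from the HMC command line with the --mpio option described above; confirm that the option is available at your HMC level via the migrlpar man page.

    # Migrate and ask LPM to strictly duplicate the MPIO configuration on the target:
    migrlpar -o m -m SourceSystem -t TargetSystem -p prodlpar1 --mpio 1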

Q. How do you map standby WWPNs? The SAN administrator only sees the active WWPNs.

A. The solution is to use the chnportlogin command. With this command, you can log in all WWPNs to the switch so they are visible. You can even run this command when the partition is not active; in that situation, the adapters will show as logged in via the VIOS, from where they should be visible to the SAN administrator.
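
A minimal sketch of the commands involved, run on the HMC; the managed system and partition names are placeholders, and lsnportlogin may not be present on older HMC levels.

    # Log in all of the partition's WWPNs (active and standby) so the SAN team can zone them:
    chnportlogin -o login -m ManagedSystem -p prodlpar1
    # Optionally list the login status of each WWPN:
    lsnportlogin -m ManagedSystem --filter "lpar_names=prodlpar1"
    # Log them out again once zoning is complete:
    chnportlogin -o logout -m ManagedSystem -p prodlpar1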

Q. If I have multiple network adapters connecting the two VIO servers, including a 10 Gbps adapter, will LPM benefit even if the clients are not on the 10 Gbps network?

A. Yes. The two VIO servers will open a full-duplex channel between them on the fastest network they can find. That is how active memory data is moved from the source to the destination server.

Q. What does LPM do after it reaches 100 percent complete?

A. Cleanup work is done on both the source and target sides. The presentation describes this.

Q. Can you explain a little bit more about POWER5, POWER6 and POWER7 compatibility modes?

A. Migration is only supported on POWER6 and POWER7 hardware. If one server is a POWER6 and the other is a POWER7, you are required to run the LPAR in a compatibility mode, which provides a common denominator of function between the two architectures. You will lose some of the additional functionality of the POWER7 or POWER7+, such as the higher number of threads, but you will have the flexibility to move the partition between servers.
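
If you want to confirm what mode a partition is actually running in, the HMC can report it; this is a sketch, and the exact field names can differ between HMC releases.

    # On the HMC, show the current processor compatibility mode of each partition:
    lssyscfg -r lpar -m ManagedSystem -F name,curr_lpar_proc_compat_mode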

Q. Is there any plan to allow slot numbers to be specified during LPM?

A. You can use the migrlpar command and specify the source and target virtual fibre channel adapter slot numbers. Remember, though, that you cannot use a slot number that is already in use on the target side. Refer to the migrlpar command line syntax.
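
A hedged example of that syntax, with placeholder names: the virtual_fc_mappings attribute is commonly written as client-slot/target-VIOS-name/target-VIOS-ID/target-VIOS-slot/target-FC-port, but verify the exact field order in the migrlpar man page for your HMC level.

    # Map client slots 3 and 4 to slot 30 on each destination VIOS:
    migrlpar -o m -m SourceSystem -t TargetSystem -p prodlpar1 \
        -i "virtual_fc_mappings=3/vios1//30/fcs0,4/vios2//30/fcs1"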

Q. Is the application running on the source partition still available during the migration?

A. Yes, the application(s) running on the source will stay active and users may never notice anything. There will be a brief (few second) suspension of threads toward the end of the migration, but network connections will not be lost, and all in-flight data will be verified. Except for that slight pause, which you may or may not see, there's no indication that anything has happened.

Q. In NPIV, will the mapping of the VFC move exactly from the source to the destination server?

A. The client slot assignments will remain unchanged for the VFC, but the server pairings may be different. Slot numbers are not guaranteed to carry over, though they will be kept if possible. If you are using MPIO, you are responsible for making sure you select VIO servers and slots on the destination that match the source. On the command line, the --mpio flag with a value of 1 requires strict duplication of multi-pathing on the target.
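
Rather than relying on slot numbers, it is usually enough to verify redundancy after the move. A minimal check, assuming an AIX client using MPIO:

    # On each destination VIOS (as padmin), confirm the client's VFCs are mapped:
    lsmap -all -npiv
    # On the AIX client, confirm the disks still have paths through both fscsi devices:
    lspath
    lsdev -Cc adapter | grep fcs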

Q. Can LPM be done on vSCSI?

A. Yes, it can. The presentation describes some of the differences between setting up vSCSI and NPIV.

Q. With all of the feature/platform requirements, does validation check these, or are you responsible for checking them yourself?

A. Validation will check everything and give you errors or warnings as needed. Errors will stop a migration, but warnings will not.
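
Validation can also be run on its own from the HMC command line, with nothing actually moved; the names below are placeholders.

    # Validate only -- errors and warnings are reported, but no migration takes place:
    migrlpar -o v -m SourceSystem -t TargetSystem -p prodlpar1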

Q. Is there a command that can be run on the client AIX LPAR that will show the vhost names from the VIO server?

A. No, but you can run either of these commands on the VIO server to see the mapping: lsmap -all -npiv for VFC mappings, and lsmap -all for vSCSI mappings.
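
The same VIOS commands can be narrowed to a single server adapter, and the client-side slot information can then be matched against the lsmap output; adapter names below are placeholders.

    # On the VIOS (as padmin):
    lsmap -vadapter vhost0              # vSCSI mapping for one server adapter
    lsmap -vadapter vfchost0 -npiv      # VFC mapping for one server adapter
    # On the AIX client, show the slot behind a given adapter to correlate with lsmap:
    lscfg -vl fcs0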

Q. Can LPM cross different HMC management domains?

A. An LPM operation can either be under the control of a single HMC that manages both the source and destination servers, or you can do a remote LPM, in which the migration involves a remote HMC that manages the target server. For remote migration, you have to configure SSH between the two HMCs. Even when two HMCs are involved, the migrating clients must be on the same subnet on both sides of the migration if it is to be live; otherwise, network connectivity would be lost. The VIO servers do not necessarily have to be on that subnet, but the two VIO servers must have access to a common network between them, ideally one with maximum bandwidth.
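
A hedged sketch of a remote (two-HMC) migration: first exchange SSH keys between the HMCs, then name the remote HMC and user on the migrlpar command. Host names, system names, and users are placeholders; check the mkauthkeys and migrlpar man pages for your HMC level.

    # On the local HMC, set up SSH key authentication to the remote HMC:
    mkauthkeys --ip remotehmc.example.com -u hscroot
    # Then migrate, pointing at the remote HMC that manages the target server:
    migrlpar -o m -m SourceSystem -t TargetSystem -p prodlpar1 \
        --ip remotehmc.example.com -u hscroot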

Q. If the entire frame goes down, is there a way partition mobility can bring the partitions up on a second frame that was planned for partition mobility?

A. LPM does not work on a dead system or partition. The solution you're referring to is Remote Restart, which enables the recovery of a partition after a system outage. There's a PRPQ that is currently available, and a fully supported version involving IBM Systems Director and VMControl is expected soon.

Q. What happens if you migrate from a system with one VIOS to one with two? Will the migration automatically use the second VIOS?

A. No. LPM tries to maintain the exact same configuration on the target as on the source. However, once you are on a system with dual VIOS, you could reconfigure the partition to take advantage of that fact. That could, of course, impair your ability to move back to a single-VIOS system.

Q. For licensing situations, does LPM support moving a partition to a user-defined shared processor pool that is capped to limit processor usage and therefore cost?

A. Yes, you can create shared processor pools on both source and target and migrate between them, maintaining a cap on entitlement and therefore licensing cost.
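
A sketch of how that might look from the HMC, assuming the pools already exist on both servers. lshwres shows the pools; the attribute used on migrlpar to select the destination pool (shown here as shared_proc_pool_name) should be verified against the migrlpar man page for your HMC level.

    # List the shared processor pools defined on the target:
    lshwres -r procpool -m TargetSystem
    # Migrate into a named (capped) pool on the target:
    migrlpar -o m -m SourceSystem -t TargetSystem -p prodlpar1 \
        -i "shared_proc_pool_name=DB_Pool"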

Q. Can you run LPM validation from IBM Systems Director without actually doing the migration?

A. IBM Systems Director and VMControl have a Relocation task that is essentially the same as LPM, although you do not have the same granularity of control as you do on the HMC.

Q. Is there a way to track LPM history on a partition?

A. Yes, the AIX error log retains a record of migrations. Information about the migrated partition is not kept on the source server; it's as if it never existed there. Some information can be obtained from the source HMC in troubleshooting situations, but that would involve working with Support.
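
On the migrated AIX partition, a quick check of the error log shows the migration records; the exact error labels vary by AIX level, but entries named along the lines of CLIENT_PMIG_STARTED and CLIENT_PMIG_DONE are typical.

    # List any partition-migration entries in the AIX error log:
    errpt | grep -i pmig
    # Show the detailed entry for a specific label, if present on your level:
    errpt -a -J CLIENT_PMIG_DONE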

Q. Why are virtual adapters on a client listed as desired instead of required? Does this apply to NPIV?

A. If they were required, you would not be able to migrate them from one server to another. VIOS adapters should also be listed as desired, with the exception of the two serial adapters. Otherwise, the desired setting applies to all virtual adapters in the client, including NPIV virtual Fibre Channel adapters.
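
One hedged way to confirm how the adapters are defined is to read the partition profile from the HMC; in each adapter specification the trailing field is generally the "is required" flag (1 = required, 0 = desired), but verify the field order for your HMC level. Names below are placeholders.

    # Show the virtual adapter definitions in the partition profile:
    lssyscfg -r prof -m ManagedSystem --filter "lpar_names=prodlpar1" \
        -F virtual_fc_adapters,virtual_scsi_adapters,virtual_eth_adapters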