Sunday, July 18, 2010

ISDN PRI back-to-back

To emulate a PSTN router we need a back-to-back E1 connection, and obviously the first step is the cable connecting both routers. The idea is to connect the HQ router to the PSTN router, so a crossover cable needs to be made.

The pin configuration should be as follows; RJ-48 connector to RJ-48 connector (crossover) pinout:


1 RX Ring - -> 4 TX Ring -
2 RX Tip + -> 5 TX Tip +
4 TX Ring - -> 1 RX Ring -
5 TX Tip + -> 2 RX Tip +


When connected, both 1MFT-E1 cards immediately turned on the "CD" (Carrier Detect) light.

The basic configuration to emulate a PSTN PRI is:

PSTN_RTR#

network-clock-participate wic 0

controller E1 0/0/0
clock source internal ---- the "pstn network" side must provide clock
pri-group timeslots 1-31

interface Serial0/0/0:15
no ip address
encapsulation hdlc
isdn switch-type primary-net5
isdn protocol-emulate network ---- this is the PSTN-emulated side 'Service Provider'
isdn incoming-voice voice
no cdp enable


HQ_RTR#

network-clock-participate wic 0


controller E1 1/0/0
pri-group timeslots 1-31


interface Serial1/0/0:15
no ip address
encapsulation hdlc
isdn switch-type primary-net5
isdn incoming-voice voice
no cdp enable
!
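
Once both sides are up, a quick sanity check (a minimal sketch; interface numbers follow the lab above, and exact output wording varies by IOS version):

PSTN_RTR# show controllers e1 0/0/0 ---- controller should be up with no alarms
PSTN_RTR# show isdn status ---- Layer 1 ACTIVE, Layer 2 MULTIPLE_FRAME_ESTABLISHED
HQ_RTR# show isdn status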


Reference:
http://www.juniper.net/techpubs/hardware/m40/m40-hwguide/html/pinout4.html

http://www.techexams.net/forums/ccvp/31284-back-back-pri-am-i-missing-something.html

http://ccvp.org/modules/newbb/viewtopic.php?topic_id=63&forum=19

http://rizzitech.blogspot.com/2009/02/wvic-1mft-e1-back-to-back-connection.html

https://supportforums.cisco.com/message/3135483

Default DHCP lease and CUCM 7.0 DHCP rebinding


Default DHCP Lease
Configuring the Address Lease Time
By default, each IP address assigned by a DHCP server comes with a one-day lease, which is the amount of time that the address is valid. To change the lease value for an IP address, use the following command in DHCP pool configuration mode:
Command: Router(config-dhcp)# lease {days [hours] [minutes] | infinite}
Purpose: Specifies the duration of the lease. The default is a one-day lease.
http://cisco.biz/en/US/docs/ios/12_0t/12_0t1/feature/guide/Easyip2.html#wp22915
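
As an illustration, changing the lease for a voice pool to eight days would look like the sketch below (the pool name, subnet, and TFTP option 150 address are made up for this example):

ip dhcp pool VOICE
 network 10.1.1.0 255.255.255.0
 default-router 10.1.1.1
 option 150 ip 10.1.1.10
 lease 8 ---- 8 days instead of the default 1-day lease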

CUCM 7.0 DHCP rebinding
Rebinding Timer (T2) Expires

If the client receives no reply from the server, it remains in the RENEWING state and regularly retransmits the unicast DHCPREQUEST to the server. During this time the client continues to operate normally from the perspective of its user. If no response from the server is ever received, the rebinding timer (T2) eventually expires, causing the client to transition to the REBINDING state. The T2 timer is set to 87.5% (7/8ths) of the lease length (Cisco's recommendation is 75% of the lease time).
Client Sends DHCPREQUEST Rebinding Message

Having received no response from the server that initially granted the lease, the client “gives up” on that server and tries to contact any server that may be able to extend its existing lease. It creates a DHCPREQUEST message and puts its IP address in the CIAddr field, indicating clearly that it presently owns that address. It then broadcasts the request on the local network.
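
To put numbers on these timers: with the default one-day (86,400-second) lease, the standard RFC 2131 values put the renewal timer T1 at 50% of the lease (43,200 s, i.e. 12 hours) and the rebinding timer T2 at 87.5% (75,600 s, i.e. 21 hours).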

http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/7_1_2/ccmcfg/b02dhsrv.html



Friday, May 28, 2010

Writing a network report

An excellent resource on the subject:

Top Down Network Design by Priscilla Oppenheimer
http://www.topdownbook.com/


John Lockie
Here's my advice:
1. Monitor: use SNMP to gather A) interface bandwidth, B) CPU, and C)
interface errors. Check the manufacturer for MIBs to do this. If setting
up SNMP is intimidating to you, contact Logic Monitor
http://www.logicmonitor.com who I happen to know, but I am sure there are
others that do this. Use the 14-day trial if you don't have funds or authority
to authorize.
2. Document: document your findings... how hard is this? When you see an
interface is at 90% utilization, state it plainly, and then provide a
solution such as LACP. Include a task cost dollar value (and include buffer
room on the cost; don't forget to consider labor, cabling, equipment,
warranty, SmartNET, etc.). It may be that management is not even asking you
for costs, in which case your job is 10x easier.
3. Summarize: reports that will go in front of executive eyeballs need to
have an "executive summary" (hence the term). Their time is valuable, so cut
to the chase on page 1 and leave the rest of the reading for the nerdy or
even "doubting" executives :)

That's all you probably need to include. If you want to go crazy (like if
your switch network is going to cost you a million to upgrade) then it might
be wise to really dive in to vendor technologies, the differences between HP
and Cisco at L2 level, etc. Go nuts if you decide to do this, the more the
better - just don't forget your executive summary, because some guys could
care less.

Some other advice I can give you, as a manager who reports to executives....
1. Keep yourself out of the equation, think for the business. By thinking
for the business interest you are in the long run thinking for your own.
2. Ask management the same question you asked us: "What do you expect to
see included in this report?" You may be surprised. Every time I am given
a directive from executives I ask them, "what do you expect to see". There
is no shame in asking, and it's actually dangerous not to ask. Sometimes
they want only a price, and other times they want the entire enchilada. My
points above assumed somewhere between those two....

One other thing....since you said 4500 series. Why not stackable 3700
series? :). Be careful here....while you are comparing old to new, you need
to know why you would do 1 new over another new. For example, it might be
obvious you need to upgrade switched network. But is it obvious why you
pick Catalyst over Procurve? A discriminating executive who knows even a
little (or lives next door to a VP for Procurve division!) could really
challenge you on this one. Here is a tip, simply look closely at things
like ISL over 802.1Q, and you may find that arguing for Cisco protocols is a
little more justified.

Good luck,
John

Thursday, May 27, 2010

COR List and Translation rules

In this lab we tried to use two functions: translation rules and COR lists.
Translations were used to transform the number coming from the PSTN into a local extension, and COR was used to restrict which destinations a caller can and cannot call.

First of all we planned our dial plan. As you know, we have been using two routers: one is CME, which has one SIP and one SCCP phone, and the other, named CME-SIP, has two SIP phones.

Here we just used one SIP phone, 4001, on the CME router and one SIP phone, 6001, on the CME-SIP router, like this:

4001--------------CME----------------CME-SIP----------------6001


Below is the COR configuration used in this scenario; it was done on the CME side.

1) Defining cor list members
dial-peer cor custom
name local_KAR
name LD_LHR
name LongDist
name international


2) Outgoing corlists
dial-peer cor list KAR
member local_KAR

dial-peer cor list LHR
member LD_LHR

dial-peer cor list LD
member LongDist

dial-peer cor list INT
member international

3) Incoming corlists

dial-peer cor list LongD
member local_KAR
member LD_LHR

dial-peer cor list Local
member local_KAR


4) applying outgoing corlist to dial peers


dial-peer voice 9042 voip
corlist outgoing LHR
destination-pattern 9042[39].......
session protocol sipv2
session target ipv4:172.16.1.2
dtmf-relay rtp-nte
codec g711ulaw

5) applying incoming corlist to ip phones


voice register pool 1
corlist incoming LongD

we also tried with
voice register pool 1
corlist incoming Local


When a caller dials a 9042........ number it gets transformed into an 042........ number; the translation simply strips the leading '9'. Here we were emulating a PSTN call using the translation rule and profile below on the CME router.

voice translation-rule 30
rule 1 /^9\(042[39].......\)/ /\1/
rule 2 /^9\([39].......\)/ /\1/

voice translation-profile lhr
translate called 30 ---- translating the called number, i.e. the DNIS

applying translation profile to dial-peer

dial-peer voice 9042 voip
translation-profile outgoing lhr
destination-pattern 9042[39].......
session protocol sipv2
session target ipv4:172.16.1.2
dtmf-relay rtp-nte
codec g711ulaw
corlist outgoing LHR
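
A rule can also be checked without placing a call (a quick sketch; the dialed digits are just an example that matches the 9042[39]....... pattern):

CME# test voice translation-rule 30 904239123456 ---- expected result: 04239123456 (rule 1 strips the leading 9)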


Now the digits are passed to the CME-SIP router, provided the SIP phone 4001 has an incoming COR list that authorizes the call to continue. When CME-SIP receives a number starting with 042 it matches the following dial-peer:

dial-peer voice 42 voip
translation-profile incoming local_Profile
session protocol sipv2
incoming called-number 042........
dtmf-relay rtp-nte
codec g711ulaw

Notice that we used a translation profile here so that, once the incoming called number is matched, it is transformed into the local number, i.e. 6001. Let's see how:

voice translation-rule 500
rule 1 /^[39]......./ /5001/
rule 2 /^042[39]......./ /6001/

voice translation-profile local_Profile
translate called 500

So following the rule it matches rule 2 and rings the 6001 phone. Simple isn't it ;)

On the CME side we tested 4001 with two incoming COR lists to verify the behaviour. First we applied:
voice register pool 1
corlist incoming LongD -- this let the call through, because LongD contains LD_LHR, the member required by the outgoing COR list LHR

We also tried:

voice register pool 1
corlist incoming Local -- this blocked the call, because Local only contains local_KAR



HTH

Regards

Sunday, May 2, 2010

Configuring SIP Gw and H.323 Gw

That was a bit tough to actually make happen. Let me first explain what exactly we wanted to test here.

The task was to call from a SIP phone to an SCCP phone and vice versa.

I was testing a scenario in my lab using two CME routers, one configured as a SIP gateway and the other as an H.323 gateway.

On the H.323 router I had only configured SCCP phones, and on the other one just a SIP phone using 3CX. The extension of the SIP phone was 5001 and of the SCCP phone 3001.

I could initially make calls from the SIP phone to the SCCP phone but not from SCCP to SIP.

Even though the respective VoIP dial-peers were created and voice service voip was configured to allow connections from SIP to H.323 and vice versa, that didn't do the trick....


Let's look at what I actually did.

-- On the H.323 gateway:

voice service voip
allow-connections h323 to sip
allow-connections sip to h323

!
ephone-dn 3
number 3001
label first_3001

!

ephone 3
device-security-mode none
mac-address 0200.4C4F.4F52
type CIPC
button 1:3

!
dial-peer voice 50 voip
destination-pattern 5...
session target ipv4:172.16.1.2
dtmf-relay rtp-nte

-- On the SIP gateway:

voice service voip
allow-connections h323 to sip
allow-connections sip to h323
allow-connections sip to sip
sip
registrar server
!

!
voice register dn 1
number 5001
allow watch
name 3cx
!
voice register pool 1
id mac 0200.4C4F.4F54
number 1 dn 1
username test password test
codec g711ulaw

!

dial-peer voice 30 voip
destination-pattern 3...
session target ipv4:172.16.1.1


After spending a lot of time digging into incoming and outgoing dial-peers and how they actually work, I finally got it all working... yeaaah.

I was actually missing the right codecs. The configs below now work just perfectly, and I have calls from both ends.

SIP

dial-peer voice 30 voip ----- that's the outgoing dial-peer
destination-pattern 3...
session target ipv4:172.16.1.1
codec g711ulaw

SIP# sh voice register pool 1 --- output

dial-peer voice 40001 voip --- the implicit dial-peer created for incoming calls
destination-pattern 5001
session target ipv4:192.168.2.10:58855
session protocol sipv2
codec g711ulaw bytes 160

H323

dial-peer voice 50 voip ----- that's the outgoing dial-peer
destination-pattern 5...
session protocol sipv2
session target ipv4:172.16.1.2
dtmf-relay rtp-nte
codec g711ulaw
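
Since the root cause was a codec mismatch, an alternative worth noting (a hedged sketch, not something I configured in this lab) is to offer a list of codecs through a voice class and let the two gateways negotiate, instead of hard-coding g711ulaw on every dial-peer:

voice class codec 1
 codec preference 1 g711ulaw
 codec preference 2 g729r8
!
dial-peer voice 50 voip
 voice-class codec 1 ---- replaces the per-dial-peer "codec g711ulaw"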








Below is the output of 'debug voice dialpeer all', which helped me a lot.


---------------AT SIP GW---------


Mar 1 00:31:05.599: //-1/xxxxxxxxxxxx/DPM/dpMatchPeersCore:
Calling Number=, Called Number=5001, Peer Info Type=DIALPEER_INFO_SPEECH
*Mar 1 00:31:05.599: //-1/xxxxxxxxxxxx/DPM/dpMatchPeersCore:
Match Rule=DP_MATCH_DEST; Called Number=5001
*Mar 1 00:31:05.603: //-1/xxxxxxxxxxxx/DPM/dpMatchCore:
Dial String=5001, Expanded String=5001, Calling Number=
Timeout=TRUE, Is Incoming=FALSE, Peer Info Type=DIALPEER_INFO_SPEECH
*Mar 1 00:31:05.611: //-1/xxxxxxxxxxxx/DPM/MatchNextPeer:
Result=Success(0); Outgoing Dial-peer=40001 Is Matched
*Mar 1 00:31:05.619: //-1/xxxxxxxxxxxx/DPM/dpMatchPeersCore:
Result=Success(0) after DP_MATCH_DEST
*Mar 1 00:31:05.619: //-1/xxxxxxxxxxxx/DPM/dpMatchPeers:
Result=SUCCESS(0)
List of Matched Outgoing Dial-peer(s):
1: Dial-peer Tag=40001

---------------AT H323 GW---------


Dial String=5001, Expanded String=5001, Calling Number=
Timeout=TRUE, Is Incoming=FALSE, Peer Info Type=DIALPEER_INFO_SPEECH
*Mar 1 00:31:09.251: //-1/xxxxxxxxxxxx/DPM/MatchNextPeer:
Result=Success(0); Outgoing Dial-peer=50 Is Matched
*Mar 1 00:31:09.255: //-1/xxxxxxxxxxxx/DPM/dpMatchPeersCore:
Result=Success(0) after DP_MATCH_DEST
*Mar 1 00:31:09.259: //-1/xxxxxxxxxxxx/DPM/dpMatchPeers:
Result=SUCCESS(0)
List of Matched Outgoing Dial-peer(s):
1: Dial-peer Tag=50
*Mar 1 00:31:09.263: //13/6C99EB548014/CCAPI/ccCallFeature:
Feature Type=25, Call Id=13

Reference

https://www.myciscocommunity.com/servlet/JiveServlet/previewBody/1765-102-2-2583/UC500-CCA-First-Look-v1.3-Lab8B.pdf

http://www.ciscopress.com/articles/article.asp?p=664148&seqNum=6

http://www.cisco.com/en/US/tech/tk652/tk90/technologies_tech_note09186a008010fed1.shtml

https://supportforums.cisco.com/thread/136551


HTH

Monday, April 19, 2010

Configuring B-ACD services

Configuring B-ACD services was one of the tasks we wanted to look into in depth, and for that we read the following comprehensive guides, which were key to getting it configured successfully.

Reference:
http://cisco.biz/en/US/docs/voice_ip_comm/cucme/bacd/configuration/guide/40bacd.html
http://cciev.wordpress.com/2006/05/29/cme-b-acd/
http://www.voiceie.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=8;t=000692

Here are the steps which were followed;

-- Creating ephone-hunt groups --

ephone-hunt 15 longest-idle
pilot 2000
list 2001, 2002
timeout 10, 10
!
ephone-hunt 16 sequential
pilot 3000
list 3001
timeout 20


-- Configuring AA scripts --
I had to download the whole b-acd-2.1.2.2.tar file to my flash first, which consists of the following two TCL scripts:
app-b-acd-2.1.2.2.tcl
app-b-acd-aa-2.1.2.2.tcl
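
For reference, the tar can be copied to flash and unpacked straight from the CLI (a minimal sketch; the TFTP server address is made up):

CME# archive tar /xtract tftp://192.168.1.50/b-acd-2.1.2.2.tar flash: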

From the config terminal, some mandatory parameters were defined as follows:

application
service queue flash:app-b-acd-2.1.2.2.tcl --- named the service name as queue
param number-of-hunt-grps 2 --- we are using 2 hunt groups as earlier defined
param aa-hunt1 2000
param aa-hunt2 3000
param queue-len 15
param queue-manager-debugs 1 --- used with debug command to trace the script

service aa flash:app-b-acd-aa-2.1.2.2.tcl --- named the service name as aa
paramspace english index 1
paramspace english language en
paramspace english location flash:
param service-name queue --- referencing the service name queue defined earlier
param handoff-string aa
param aa-pilot 1000 --- using pilot number 1000, for which we will define the dial-peer accordingly
param welcome-prompt _bacd_welcome.au
param number-of-hunt-grps 2
param dial-by-extension-option 3 --- explicitly defining the dial-by-extension option as 3

param max-extension-length 4
param second-greeting-time 30
param call-retry-timer 15
param max-time-call-retry 100
param max-time-vm-retry 2
param voice-mail 1002

-- Creating a VOIP dial-peer --

When someone dials the number 1000, this dial-peer is matched and the aa service gets activated. 20.0.0.1 is the loopback address defined on the CME router.

dial-peer voice 1000 voip
service aa
destination-pattern 1000
session target ipv4:20.0.0.1
incoming called-number 1000
dtmf-relay h245-alphanumeric
codec g711ulaw
no vad

-- Now Some diagnostic commands --

CME#call application voice load aa --- loading the aa service (you will need to reload this service if you have changed the pre-defined parameters or added new ones)

CME#
*Mar 1 00:25:08.519: //-1//HIFS:/hifs_ifs_cb: hifs ifs file read succeeded. size=35485, url=flash:app-b-acd-aa-2.1.2.2.tcl
*Mar 1 00:25:08.539: //-1//HIFS:/hifs_free_idata: hifs_free_idata: 0x67DAA108
*Mar 1 00:25:08.539: //-1//HIFS:/hifs_hold_idata: hifs_hold_idata: 0x67DAA108
*Mar 1 00:25:08.739: //-1//TCL :EE66B155AC000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter second-greeting-time = 30 --
*Mar 1 00:25:08.787: //-1//TCL :EE66B155AC000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter call-retry-timer = 15 --
*Mar 1 00:25:08.855: //-1//TCL :EE66B155AC000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter max-time-call-retry = 100 --
*Mar 1 00:25:08.895: //-1//TCL :EE66B155AC000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter max-time-vm-retry = 2 --
*Mar 1 00:25:08.951: //-1//TCL :EE66B155AC000:/tcl_PutsObjCmd: TCL AA: -- Valid Mandatory parameter number-of-hunt-grps = 2 --

CME#call application voice load queue --- loading queue service

CME#
*Mar 1 00:25:43.759: //-1//HIFS:/hifs_ifs_cb: hifs ifs file read succeeded. size=24985, url=flash:app-b-acd-2.1.2.2.tcl
*Mar 1 00:25:43.767: //-1//HIFS:/hifs_free_idata: hifs_free_idata: 0x67DAA198
*Mar 1 00:25:43.771: //-1//HIFS:/hifs_hold_idata: hifs_hold_idata: 0x67DAA198
*Mar 1 00:25:44.031: //-1//TCL :EE66B15694000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid optional parameter queue-manager-debugs = 1 --
*Mar 1 00:25:44.071: //-1//TCL :EE66B15694000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid Mandatory parameter queue-len = 15 --
*Mar 1 00:25:44.147: //-1//TCL :EE66B15694000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid Mandatory parameter number-of-hunt-grps = 2 --

CME#sh call application sessions -- nothing showed up, as we haven't initiated any call to the pilot number yet

CME#debug voice application script -- lets you see whether you are getting hits on the application

CME#csim start 1000 --- testing pilot number of aa 1000

csim: called number = 1000, loop count = 1 ping count = 0

*Mar 1 00:26:21.647: //-1//TCL :EE66B1577C000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter second-greeting-time = 30 --
*Mar 1 00:26:21.711: //-1//TCL :EE66B1577C000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter call-retry-timer = 15 --
*Mar 1 00:26:21.747: //-1//TCL :EE66B1577C000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter max-time-call-retry = 100 --
*Mar 1 00:26:21.795: //-1//TCL :EE66B1577C000:/tcl_PutsObjCmd: TCL AA: -- Valid mandatory parameter max-time-vm-retry = 2 --
*Mar 1 00:26:21.835: //-1//TCL :EE66B1577C000:/tcl_PutsObjCmd: TCL AA: -- Valid Mandatory parameter number-of-hunt-grps = 2 --
*Mar 1 00:26:21.983: //6//TCL :/tcl_PutsObjCmd:
proc init_perCallvars
*Mar 1 00:26:21.987:
*Mar 1 00:26:22.055: //6//TCL :/tcl_PutsObjCmd: TCL AA: +++ B-ACD-SERVICE not registered, Starting B-ACD-SERVICE +++
*Mar 1 00:26:22.519: //-1//TCL :EE66B15864000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid optional parameter queue-manager-debugs = 1 --
*Mar 1 00:26:22.551: //-1//TCL :EE66B15864000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid Mandatory parameter queue-len = 15 --
*Mar 1 00:26:22.591: //-1//TCL :EE66B15864000:/tcl_PutsObjCmd: TCL B-ACD: -- Valid Mandatory parameter number-of-hunt-grps = 2 --
*Mar 1 00:26:22.863: %IVR-6-APP_INFO: TCL B-ACD: >>> B-ACD Service Started <<<
*Mar 1 00:26:22.871: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: >>> B-ACD Service Started <<<
*Mar 1 00:26:22.907: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: >>> Handoff String = aa <<<
*Mar 1 00:26:22.975: //6//TCL :/tcl_PutsObjCmd: proc init_perCallvars
*Mar 1 00:26:22.979:
*Mar 1 00:26:23.175: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: >>> Stat collection disabled for queue 2000 <<<
*Mar 1 00:26:23.227: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: ++ Message received from IOS ++
*Mar 1 00:26:23.255: //6//TCL :/tcl_PutsObjCmd: TCL AA: ++ Playing Welcome Prompt and options menu ++
*Mar 1 00:26:26.363: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: ++ Message received from IOS ++
*Mar 1 00:26:26.435: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: >>> Stat collection disabled for queue 3000 <<<
*Mar 1 00:26:26.443: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: ++ Message received from IOS ++
*Mar 1 00:26:26.495: //6//TCL :/tcl_PutsObjCmd: TCL B-ACD: ++ Message received from IOS ++
*Mar 1 00:26:45.843: //6//TCL :/tcl_PutsObjCmd: TCL AA: +++ No option selected +++.
csim: loop = 1, failed = 0
csim: call attempted = 1, setup failed = 0, tone failed = 1


--- After we placed a call from a softphone to pilot 1000, we ran the following command during the call, with the output as follows ---

CME#sh call application sessions
Session ID 4

App: queue
Type: Service
Url: flash:app-b-acd-2.1.2.2.tcl

Session ID 8

App: aa
Type: Service
Url: flash:app-b-acd-aa-2.1.2.2.tcl



________________________________________________________

Thursday, April 15, 2010

Rightsizing CME/SRST

Below are the maximum ephone numbers per platform:

1. 2811 : CME 42 / SRST 42
2. 2821 : CME 58 / SRST 58
3. 2851 : CME 110 / SRST 110
4. 2801 : CME 30 / SRST 30
5. 3825 : CME 185 / SRST 340
6. 3845 : CME 262 / SRST 720

Sunday, March 21, 2010

Cisco Unified CME VoIP Call Transfer Options

Your Cisco Unified CME system by default is set up to allow local transfers between IP phones only. It
uses the Cisco H.323 call transfer extensions to transfer calls that include an H.323 VoIP participant.
To configure your Cisco Unified CME system to use H.450.2 transfers (this is recommended), set
transfer-system full-consult under the telephony-service command mode. You also have to use this
configuration for SIP VoIP transfers.
To configure your Cisco Unified CME system to permit transfers to nonlocal destinations (VoIP or
PSTN), set the transfer-pattern command under telephony-service. The transfer-pattern command
also allows you to specify that specific transfer-to destinations should receive only blind transfers. You
also have to use this configuration for SIP VoIP transfers. The transfer-pattern command allows you to
restrict trunk-to-trunk transfers to prevent incoming PSTN calls from being transferred back out to the
PSTN (employee toll fraud). Trunk-to-trunk transfers are disabled by default, because the default is to
allow only local extension-to-extension transfers.
To allow the H.450.12 service to automatically detect the H.450.2 capabilities of endpoints in your
H.323 VoIP network, use the supplementary-services command in voice service voip command mode.
To enable hairpin routing of VoIP calls that cannot be transferred (or forwarded) using H.450, use the
allow-connections command. The following example shows a call transfer configuration using this
command.
voice service voip
   supplementary-service h450.12
   allow-connections h323 to h323
telephony-service
   transfer-system full-consult
   transfer-pattern .T
The configuration shown in the preceding example turns on the H.450.2 (transfer-system full-consult)
and H.450.12 services, allows VoIP-to-VoIP hairpin call routing (allow-connections) for calls that don’t
support H.450, and permits transfers to all possible destinations (transfer-pattern). The transfer
permission is set to .T to provide full wildcard matching for any number of digits. (The T stands for
terminating the transfer destination digit entry with a timeout.)


The following example shows a configuration for more restrictive transfer permissions.
telephony-service
   transfer-system full-consult
   transfer-pattern 1...
   transfer-pattern 2... blind
This example permits transfers using full consultation to nonlocal extensions in the range 1000 to 1999.
It also permits blind transfers to nonlocal extensions in the range 2000 to 2999.


Notes Regarding H.450.12 and ECS


H.450.12
You can compromise between the H.450.2 and hairpin routing call methods by turning on the H.450.12
protocol on your Cisco Unified CME system (this is recommended). You must be using at least
Cisco Unified CME 3.1 to use H.450.12. With H.450.12 enabled, your Cisco Unified CME system can
use the H.450.12 protocol to automatically discover the H.450.x capabilities of VoIP endpoints within
your VoIP network. When H.450.12 is enabled, the Cisco Unified CME system can automatically detect
when an H.450.2 transfer is possible. When it isn’t possible, the Cisco Unified CME system can fall back
to using VoIP hairpin routing. Cisco Unified CME also can automatically detect a call from a
(non-H.450-capable) Cisco Unified CallManager.


Empty Capabilities Set
For the sake of completeness, it is worth mentioning a fourth alternative for call transfers: Empty
Capabilities Set (ECS). Cisco Unified CME does not support the instigation of transfer using ECS. But
because a Cisco Unified CME router also has the full capabilities of the Cisco IOS Release H.323 voice
infrastructure software, it can process receipt of an ECS request coming from a far-end VoIP device. In
other words, a Cisco Unified CME system can be a transferee or transfer-to party in an ECS-based
transfer. A Cisco Unified CME system does not originate a transfer request using ECS. The problem with
ECS-based transfers is that in many ways they represent a combination of the worst aspects of the
end-to-end dependencies of H.450.2 together with the cumulative problems of hairpin for multiple
transfers. Many ECS-based transfer implementations do not allow you to transfer a call that has already
been transferred in the general case of VoIP intersystem transfers.

Disclaimer : The Extract is from Cisco Systems Documentation

Monday, March 8, 2010

Demystifying Mpps

The throughput parameter for switches has always been ambiguous to me; I finally managed to crack it, with some help from giuslar and ganesh @ Cisco NetPro.

2960-48PST-S -- 13.3 Mpps

The figure Mpps expresses the maximum number of frames per second that can be processed by the device.
It is not dependent on frame size but clearly small frames require higher packet rates.

To give you an idea of what this number means:
The smallest frames in Ethernet are 64 bytes in size. Taking into account the preamble (8 bytes) and the minimum interframe gap (together roughly 20.2 bytes of overhead), to fill a GE port in one direction you need about
1,484,560 frames per second:

10^9 / [(64+20.2)*8]

where 8 is bits/byte

so a figure of 13.3 Mpps is equivalent to (13.3 M * (64+20.2) * 8) / 10^9 ≈ 8.95 Gbps, and 8.95 / 2 ≈ 4.47 GE ports filled with smallest frames bidirectionally.

on the other hand frames of max size 1518 bytes require 81264 fps to fill a GE port in one direction.

So this number expresses the forwarding capability of the device.
A non-blocking device with 48 GE ports would require 2 * 1,484,560 * 48 pps or higher.

A device like C2960 can be classified as centralized CEF forwarding.
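
For scale, evaluating the expression above: 2 * 1,484,560 * 48 ≈ 142.5 Mpps, so the 13.3 Mpps figure for the 2960 is well short of what a fully non-blocking 48 GE-port box would need.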


Mpps regarding routers,

Mpps stands for millions of packets per second, and Cisco prefers to express throughput in Mpps. For a Layer 3 switch the Mpps value is a shared one. For some of the higher-end Cisco routers the forwarding is "distributed" across multiple line cards, in which case the pps numbers are based on the number of line cards; but for non-distributed architectures (Catalyst switches) the numbers are based on the routing engine, so it is the maximum number of packets per second that the box can route.

and as giuslar said 2960 switches are centralized cef based forwarding.

Switching capacity vs. throughput
Cisco Catalyst 4900 Series Switch Model Comparison for Fiber Aggregation


Cisco Catalyst 4928 10 Gigabit Ethernet Switch
Switching capacity: 96 Gbps
Throughput: 71 Mpps

Switching capacity is usually given as the amount of data the switch fabric can move in a given time frame (Gbps), while throughput expresses how many frames can actually be forwarded across the switch in that time frame (Mpps).

Wednesday, February 24, 2010

Jumbo Frames Demystified

The Promise and Peril of Jumbo Frames

We sit at the intersection of two trends:

  1. Most home networking gear, including routers, has safely transitioned to gigabit ethernet.
  2. The generation, storage, and transmission of large high definition video files is becoming commonplace.
If that sounds like you, or someone you know, there's one tweak you should know about that can potentially improve your local network throughput quite a bit -- enabling Jumbo Frames.
The typical UDP packet looks something like this:
udp packet diagram
But the default size of that data payload was established years ago. In the context of gigabit ethernet and the amount of data we transfer today, it does seem a bit.. anemic.

The original 1,518-byte MTU for Ethernet was chosen because of the high error rates and low speed of communications. If a corrupted packet is sent, only 1,518 bytes must be re-sent to correct the error. However, each frame requires that the network hardware and software process it. If the frame size is increased, the same amount of data can be transferred with less effort. This reduces CPU utilization (mostly due to interrupt reduction) and increases throughput by allowing the system to concentrate on the data in the frames, instead of the frames around the data.
I use my beloved energy efficient home theater PC as an always-on media server, and I'm constantly transferring gigabytes of video, music, and photos to it. Let's try enabling jumbo frames for my little network.
The first thing you'll need to do is update your network hardware drivers to the latest versions. I learned this the hard way, but if you want to play with advanced networking features like Jumbo Frames, you need the latest and greatest network hardware drivers. What was included with the OS is unlikely to cut it. Check on the network chipset manufacturer's website.
Once you've got those drivers up to date, look for the Jumbo Frames setting in the advanced properties of the network card. Here's what it looks like on two different ethernet chipsets:
gigabit jumbo marvell yukon advanced settings   gigabit jumbo realtek advanced settings
That's my computer, and the HTPC, respectively. I was a little disturbed to notice that neither driver recognizes exactly the same data payload size. It's named "Jumbo Frame" with 2KB - 9KB settings in 1KB increments on the Realtek, and "Jumbo Packet" with 4088 or 9014 settings on the Marvell. I know that technically, for jumbo frames to work, all the networking devices on the subnet have to agree on the data payload size. I couldn't tell quite what to do, so I set them as you see above.
(I didn't change anything on my router / switch, which at the moment is the D-Link DGL-4500; note that most gigabit switches support jumbo frames, but you should always verify with the manufacturer's website to be sure.)
I then ran a few tests to see if there was any difference. I started with a simple file copy.
Default network settings
gigabit jumbo frames disabled file copy results
Jumbo Frames enabled
gigabit jumbo frames enabled file copy results
My file copy went from 47.6 MB/sec to 60.0 MB/sec. Not too shabby! But this is a very ad hoc sort of testing. Let's see what the PassMark Network Benchmark has to say.
Default network settings
gigabit jumbo frames disabled, throughput graph
Jumbo Frames enabled
gigabit jumbo frames enabled, throughput graph
This confirms what I saw with the file copy. With jumbo frames enabled, we go from 390,638 kilobits/sec to 477,927 kilobits/sec average. A solid 20% improvement.
Now, jumbo frames aren't a silver bullet. There's a reason jumbo frames are never enabled by default: some networking equipment can't deal with the non-standard frame sizes. Like all deviations from default settings, it is absolutely possible to make your networking worse by enabling jumbo frames, so proceed with caution. This SmallNetBuilder article outlines some of the pitfalls:

1) For a large frame to be transmitted intact from end to end, every component on the path must support that frame size. The switch(es), router(s), and NIC(s) from one end to the other must all support the same size of jumbo frame transmission for a successful jumbo frame communication session.
2) Switches that don't support jumbo frames will drop jumbo frames.
In the event that both ends agree to jumbo frame transmission, there still needs to be end-to-end support for jumbo frames, meaning all the switches and routers must be jumbo frame enabled. At Layer 2, not all gigabit switches support jumbo frames. Those that do will forward the jumbo frames. Those that don't will drop the frames.
3) For a jumbo packet to pass through a router, both the ingress and egress interfaces must support the larger packet size. Otherwise, the packets will be dropped or fragmented.
If the size of the data payload can't be negotiated (this is known as PMTUD, path MTU discovery) due to firewalls, the data will be dropped with no warning, or "blackholed". And if the MTU isn't supported, the data will have to be fragmented to a supported size and retransmitted, reducing throughput.
In addition to these issues, large packets can also hurt latency for gaming and voice-over-IP applications. Bigger isn't always better.
Still, if you regularly transfer large files, jumbo frames are definitely worth looking into. My tests showed a solid 20% gain in throughput, and for the type of activity on my little network, I can't think of any downside.


 Comments Worth Noting:

I have been building networks for broadcasters for over a decade - who always wanted bigger / faster / more type networks.
Jumbo frames are great in theory, but the pain level can be very high.
A core network switch can be brought to its knees when 9 Kbyte frames have to be fragmented to run out a lower MTU interface.
Many devices don't implement PMTU correctly, or just ignore responses - video codecs seem particularly prone to this.
And wasn't there a discussion a few newsletters ago about not trying to optimise things too much? If you need 20% more network performance, but you are only operating at maybe 40% load, then you need a faster machine or a better NIC card.
And there have been something like 5 definitions of jumbo just in the Cisco product line. Also telecom manufacturers' idea of jumbo is often frames of 4 Kbytes, not 9 Kbytes.....
And just to set the record straight - the reason for the 1514 bytes frame limit in GigE and 10G ethernet is backward compatibility.
Just about every network has some 10/100 (or 10 only) equipment still, and the 1514 limit has been built into other standards such as 802.11 wireless LAN.
the old saying is that God would have struggled to make the world in 7 days if he started with an installed base...

------------------------------------------------------------------------

Just a couple things to point out.
File transfer is typically done using TCP, not UDP. TCP has more overhead than UDP.
I'm curious why we see a sawtooth pattern in the un-jumbo framed graph. Is that TCP Vegas doing its thing?
I'm glad you've gone ahead and tried this out. Jumbo frames wouldn't exist if they didn't have a purpose, but with all the different kinds of traffic I think 1500 MTU is a good choice.
One issue with jumbo frames that you touched on, but didn't adequately explain, is that most consumer switches use the store-and-forward method of switching packets. This means that your switch must receive the whole packet before it can send it along, and it can't be doing anything else because packets can't be multiplexed. This can cause unacceptable latency (with 2 computers it's not a big deal, but between several machines all trying to send data, you can end up with some seriously delayed packets).
I just would have liked to see more reasons not to do this that it's not a supported standard and doesn't work with a lot of hardware. There are other reasons this has not become the default.

----------------------------------------------------------------------

@Bob from what I have seen IPv6 is potentially a bigger problem than IPv4, because where an IPv4 router may see that the packet is too large and fragment it, IPv6 leaves it to the end devices.

---------------------------------------------------------------------------------


Jumbo frames are great. I work on VMware ESX networking, and I will point out what may not be obvious to everyone. In a virtualized environment (hosted or hypervisor) jumbo frames make an even bigger difference, since you are doing more work per packet to begin with. That's why we added jumbo frame support since ESX 3.5 shipped.
My experience is that any recent machine can easily push full 1Gbit line rate (on native, and for that matter ESX VMs). Setting Jumbo Frames will save you on CPU though, which will allow you to run more VMs or otherwise use that power. And while Jumbo Frames are nice- they get you from 1.5k packets to 9, TCP Segmentation Offloading (TSO) is much better, since you push down entire 64k (or sometimes up to 256k) packets, and an engine on the NIC itself automatically handles dicing them into 1.5K packets. Most good nics support this- Intel, Broadcom, etc. On the other side, the reverse is LRO, or RSS, but this is more complicated and less common. Plus with TSO, you don't have to worry about MTU.
The other thing I would mention is- for the love of god, don't run networking benchmarks by doing a file copy. With 1GBit networks, you are limited by your disk speed! Run a proper tool such as iperf (brain dead simple) or netperf, which just blasts data. Even if your hard drive could reach 1Gbit speeds, you would be wasting cycles, so your networking performance would be worse. You always want to look at these things in isolation.

--------------------------------------------------------------------------------------------

The reason that all these people are seeing performance improvements using Jumbo Frame on Windows is because Windows networking stack sucks. Windows is really stupid and often will not let a single tcp stream reach the full capacity of the NIC. I.e. you run 1 TCP stream and measure 400Mbits, but if you ran 3 in parallel you would hit 940Mbits (~Line rate). This is even more annoying with 10G, since you need like 18 streams to reach the peak performance. Linux doesn't have these problems, and will give you its best possible performance on a single stream. I can only imagine Window's behavior is the result of some misguided attempt at ensuring fairness between connections by making sure that even if there is only one connection, it never uses the full capacity.

--------------------------------------------------------------------------------------------

If you simply enable jumbo frames on your NIC, every connection to any Internet destination (which don't support jumbos) will need to undergo PMTU discovery, PMTU blackhole detection, router fragmentation, or other time-consuming / performance-sapping hacks. This might explain why people complain about latency issues with gaming. These people are also seeing slightly slower peformance with all Internet activity.
*nix, as/400/, mainframes, and other operating systems let you set the frame size on a per route basis. E.g.,
route add -net 0.0.0.0 gw 192.168.0.1 mss 1460
This tells the OS to use jumbo frames only on the local LAN, and to assume a normal packet size everywhere else.
Alas, Windows has no such ability. One solution on Windows is to use two NICs attached to the same network. Have one NIC configured with normal frames and the default route. Have the second NIC configured for jumbos with no default route.

---------------------------------------------------------------------------------------

I participated in the IEEE 802.3 committee for a while. IEEE never standardized a larger frame size for two reasons that I know of:
1. The end stations can negotiate the frame size, but there was no backwards-compatible way to ensure that all L2 bridges between them can handle it. Even if you send a jumbo frame successfully, you can still run into a problem later if the network topology changes and your packets begin taking a different path through the network.
2. The CRC32 at the end of the packet becomes weaker after around 4 KBytes of data. It can no longer guarantee that single bit errors will be caught, and the multibit error detection becomes weaker as well.
One is free to enable it, and it does improve the performance, but the situation is unlikely to ever get better in terms of standard interoperability. It will always be an option to be enabled manually.
Also, a number of years ago, jumbo frames provided a much bigger boost. Going from 1.5K to 9K regularly doubled performance or more. What has happened since is smarter ethernet NICs: they routinely coalesce interrupts, steer packets from the same flow to the same CPU, and sometimes even reassemble the payload of the 1.5K frames back into larger units. The resistance to standardizing jumbo frames resulted in increased innovation elsewhere to compensate.

-----------------------------------------------------------------------------

@Timothy Layer 2 ethernet switches will just drop packets they cannot handle. It is not just if they don't handle jumbo frames: they can drop a normal size packet if their internal queues are full, or if rate limiting has been configured, or if the switch hit some other internal condition which the ASIC designer didn't bother resolving. They just drop the packet and expect the sender to retransmit. There is no mechanism for an L2 device to send back a notification that it has dropped the packet. A managed L2 switch will have some counters so you can post-mortem analyze what is wrong with your network.
Layer 3 routers will drop packets for more reasons, in addition to queue congestion. For example when the packet is too big and the don't fragment bit is set, an ICMP message is sent back (this is how path MTU discovery works). Similarly routers send back ICMP messages if they have no route to the destination.
Even the ICMP is just a best effort notification. Routers routinely limit the rate of ICMP messages they will send, to avoid having a flood of ICMP messages make a bad network problem worse. ICMP messages can also be dropped on their way back to the originator. So the best the sender can expect is it _might_ get notified of an L3 problem, sometime.

--------------------------------------------------------------------------


Disclaimer: Credit to the original poster
http://www.codinghorror.com/blog/2009/03/the-promise-and-peril-of-jumbo-frames.html
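
On the Cisco Catalyst side (which is the gear this blog mostly deals with), jumbo frames are enabled globally rather than per NIC. A hedged sketch for a 2960/3560-class switch (the maximum frame size and the need for a reload vary by platform and IOS version):

Switch(config)# system mtu jumbo 9000
Switch# reload ---- the new jumbo MTU only takes effect after a reload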

Sunday, February 7, 2010

Configuring LLQ and LFI on a Router

LFI

Link fragmentation and interleaving: why do we need it? Cisco recommends fragmentation and interleaving for links of 768 kbps or slower. On such slow links a large data packet takes a long time to serialize onto the wire, so a small voice packet queued behind it suffers excessive delay; fragmentation and interleaving is not needed on higher-speed links.


#access-list 102 permit udp any any range 16384 32767
#access-list 103 permit tcp any eq 1720 any
#access-list 103 permit tcp any any eq 1720

#class-map match-all VOICE-SIGNALING
#match access-group 103
#class-map match-all VOICE-TRAFFIC
#match access-group 102

#policy-map VOICE-POLICY
#class VOICE-TRAFFIC
#priority 48
#class VOICE-SIGNALING
#bandwidth 8
#class class-default
#fair-queue

#interface multilink1
#ip address 172.22.130.1 255.255.255.252
#ip tcp header-compression iphc-format
#ip rtp header-compression iphc-format

#service-policy output VOICE-POLICY

#ppp multilink
#ppp multilink fragment-delay 10
#ppp multilink interleave
#multilink-group 1

#interface serial 0/0
#multilink-group 1

Config description,



#access-list 102 permit udp any any range 16384 32767
#access-list 103 permit tcp any eq 1720 any
#access-list 103 permit tcp any any eq 1720

In the first statement (102) we are matching UDP ports 16384 to 32767, which carry the voice RTP payload.
In the second and third statements we are matching the TCP voice signaling port 1720.

#class-map match-all VOICE-SIGNALING
#match access-group 103
#class-map match-all VOICE-TRAFFIC
#match access-group 102


The relevant class maps, matching the configured ACLs.


#policy-map VOICE-POLICY
#class VOICE-TRAFFIC
#priority 48
#class VOICE-SIGNALING
#bandwidth 8
#class class-default
#fair-queue

The policy map is configured with the relevant policies. Here we have configured 48 kbps of priority bandwidth for voice traffic, which covers merely a single call, so you should size it according to your situation. One thing is very important here: VOICE-TRAFFIC gets 48 kbps of priority bandwidth and traffic exceeding that will be policed; on the contrary, the bandwidth command reserves a minimum of 8 kbps for VOICE-SIGNALING, which can exceed its 8 kbps threshold, whereas the priority class cannot.

Everything else will be treated by weighted fair queuing (fair-queue), which penalizes high talkers (sessions consuming more bandwidth) in favour of low talkers (sessions consuming less bandwidth).



#interface multilink1
#ip address 172.22.130.1 255.255.255.252
#ip tcp header-compression iphc-format
#ip rtp header-compression iphc-format

We have to create interface multilink1 to enable LFI.
Then the header compressions will compress the headers accordingly.

#service-policy output VOICE-POLICY

applies the LLQ to this interface, to be used along with LFI.


#ppp multilink
#ppp multilink fragment-delay 10
#ppp multilink interleave
#multilink-group 1

#interface serial 0/0
#multilink-group 1

the first statement enables multilink.

#ppp multilink fragment-delay 10 limits the serialization delay of any fragment to no more than 10 ms. On a 56 kbps link a 1500-byte packet takes about 215 ms to be put on the wire, which is too much; we want the delay to stay somewhere between 150-200 ms, so fragmentation definitely helps.

#ppp multilink interleave enables interleaving, so that voice packets can be sent in between the fragments created by fragmentation instead of waiting behind a whole large packet; voice always goes out first.

Then the interleaving and fragmentation is applied on the interface
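
After applying the policy, verification can be done with the usual commands (a short sketch; interface names follow the config above):

#show policy-map interface multilink1 ---- per-class counters, priority and bandwidth in action
#show ppp multilink ---- fragmentation and interleaving statistics
#show interfaces multilink1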

Configuring LLQ Quality of service

Configuring LLQ on a switch access port.


#interface fastethernet 0/1
#wrr-queue cos-map 1 0 1
#wrr-queue cos-map 2 2 3
#wrr-queue cos-map 3 4 6 7
#wrr-queue cos-map 4 5
#priority-queue out

#mls qos trust device cisco-phone
#mls qos trust cos
#switchport voice vlan 100
#switchport access vlan 10

#switchport priority extend cos 0
#mls qos map cos-dscp 0 8 16 24 34 46 48 56
#wrr-queue bandwidth 10 20 70 0 (last weight 0 or 1)

Description and usage of each command is as follows


In this Configuration im going to configure low latency queuing on a cisco switch, applicable to access port connecting to the IP phone daisy chained to a PC.


#interface fastethernet 0/1
#wrr-queue cos-map 1 0 1
#wrr-queue cos-map 2 2 3
#wrr-queue cos-map 3 4 6 7
#wrr-queue cos-map 4 5

In the above config we have configured four queues. In "1 0 1", the first digit is the queue number and the following 0 and 1 are the CoS values mapped to that queue; queues 2, 3 and 4 are configured similarly. Queue 4 is the main (priority) queue, to which CoS value 5 is mapped; the IP phone marks its voice traffic with CoS 5.


#priority-queue out

This is a very important command: it makes queue 4 the priority (expedite) queue, so in case of a bottleneck PQ traffic goes out first.

#mls qos trust device cisco-phone
#mls qos trust cos

Trust the CoS value only if a Cisco IP phone is attached, which is not ideal if you have phones from other vendors; switches use CDP to detect whether a Cisco IP phone is attached.

#switchport voice vlan 100
#switchport access vlan 10

#switchport priority extend cos 0

Mark any packet from PC with cos 0

#mls qos map cos-dscp 0 8 16 24 34 46 48 56

The switch will rewrite the DSCP of packets according to the above map, so that a Layer 3 device encountering the packet later can make decisions based on the Layer 3 DSCP markings; the listed values map to CoS values 0 1 2 3 4 5 6 7 respectively.

#wrr-queue bandwidth 10 20 70 0 (last weight 0 or 1)

Sets the WRR weight for each queue. Remember queue 4 is the priority queue, so it will be serviced first regardless; its weight can be set to 0 or 1, with 0 preferable.
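
To confirm the queue maps and weights took effect (a brief sketch; exact syntax varies slightly across Catalyst platforms):

#show mls qos interface fastethernet 0/1 queueing
#show mls qos maps cos-dscp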

Saturday, February 6, 2010

Configuring Cisco Extension Mobility in CUCM 7

Configuration Checklist for Cisco Extension Mobility 
https://[CUCM_IP]/Help/en_US/ccm/wwhelp/wwhimpl/common/html/frameset.htm
Perform the procedures in the order shown in Table 8-1 to configure Cisco Extension Mobility.
Summary steps in Table 8-1 point out the major tasks that are required to configure Cisco Extension Mobility in Cisco Unified Communications Manager Administration. For a complete set of instructions, be sure to follow the procedure that is listed in the Related Procedures and Topics. 
Table 8-1 Configuration Checklist for Cisco Extension Mobility 

Configuration Steps
Related Procedures and Topics
Step 1:
Using Cisco Unified Serviceability, choose Tools > Service Activation to activate the Cisco Extension Mobility service.
Note : To disable the extension mobility service on any node, you must first deactivate the service for that node in Service Activation.
Note : When a change in activation or deactivation of the Cisco Extension Mobility service occurs, on any node, the database tables get updated with information that is required to build the service URLs. The database tables also get updated when the extension mobility service parameters get modified. The EMApp service handles the change notification.
For information on service activation, refer to the Cisco Unified Serviceability Administration Guide.
Step 2:
Create the Cisco Extension Mobility Service.
Summary steps include
  • Choose Device > Device Settings > Phone Services.
  • Enter the service name (such as, Extension Mobility Service or EM).
  • Enter the following URL: http://<CUCM server IP address>:8080/emapp/EMAppServlet?device=#DEVICENAME#
Note : If you should enter the URL incorrectly and subscribe the wrong service to the phones, you can correct the URL, save it, and press Update Subscriptions or correct the URL and resubscribe each phone to which the wrong service was subscribed.
  • Select values for Service Category and Service Type.
  • Enter a value for Service Vendor (Java MIDlet services only). (wrong).
  • Select XML.
  • Click Save.
  • Check the Enable check box (required).
Note : For Java MIDlet services, the service name and service vendor must exactly match the values that are defined in the Java Application Descriptor (JAD) file.
Step 3:
Configure administration parameters.
Step 4:
Create a default device profile for each phone type that you want to support Cisco Extension Mobility.
Step 5:
Create the user device profile for a user.
Summary steps include
  • Choose Device > Device Settings >Device Profile and click Add New.
  • Enter the Device Type.
  • Enter the Device Profile Name, choose the phone button template, and click Save.
  • Enter the directory numbers (DNs) and required information and click Save. Repeat for all DNs.
  • To enable intercom lines for this device profile, configure intercom directory numbers (DNs) for this device profile. You configure an intercom DN in the Intercom Directory Number Configuration window, which you can also access by choosing Call Routing > Intercom > Intercom Directory Number. You must designate a Default Activated Device in the Intercom Directory Number Settings pane for an intercom DN to be active.
Intercom Directory Number Configuration, Cisco Unified Communications Manager Administration Guide
Step 6:
Associate a user device profile to a user.
Summary steps include
  • Choose User Management > End User and click Add New; enter user information.
  • In Available Profiles, choose the service that you created in Step 2 and click the down arrow; this places the service that you chose in the Controlled Profiles box.
  • Click Save.
Step 7:
Configure and subscribe Cisco Unified IP Phone and user device profile to Cisco Extension Mobility.
Summary steps include
  • Subscribe the phone and the user device profile to Cisco Extension Mobility.
  • Choose Device > Phone and click Add New.
  • On the Phone Configuration window, in Extension Information, check Enable Extension Mobility.
  • In the Log Out Profile drop-down list box, choose Use Current Device Settings or a specific configured profile and click Save.
  • To subscribe Cisco Extension Mobility to the Cisco Unified IP Phone, go to the Related Links drop-down list box in the upper, right corner of the window and choose Subscribe/Unsubscribe Services; then, click Go.
Cisco Unified IP Phone Configuration, Cisco Unified Communications Manager Administration Guide
Finding an Actively Logged-In Device, Cisco Unified Communications Manager Administration Guide