Wednesday, January 12, 2011

Is there a command in Cisco IOS to show all routes tagged with a specific tag?

In Cisco IOS, if I have a route-map entry as follows:

route-map redistribute deny 10
 match tag 65000 100
!

Is there a 'show' command that will give me a list of all routes that will match that stanza?

EDIT: To those thinking about using 'show ip route' and 'inc', the summary form of show ip route doesn't include tag information:

Router>show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is x.x.x.x to network 0.0.0.0

B    216.221.5.0/24 [20/2948] via 208.51.134.254, 1d19h
B    216.187.99.0/24 [20/0] via 4.69.184.193, 1d19h
B    210.51.225.0/24 [20/0] via 157.130.10.233, 1d19h
...

It is only displayed when you provide a prefix as an argument:

route-views.oregon-ix.net>show ip route 216.221.5.0
Routing entry for 216.221.5.0/24
  Known via "bgp 6447", distance 20, metric 2948
  Tag 3549, type external
  Last update from 208.51.134.254 1d19h ago
  Routing Descriptor Blocks:
  * 208.51.134.254, from 208.51.134.254, 1d19h ago
      Route metric is 2948, traffic share count is 1
      AS Hops 2
      **Route tag 3549**

So a single 'show ip route' command doesn't let you get information about all routes tagged with a specific tag.
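
The only workaround I see so far is to script the per-prefix lookup from outside the router. A rough sketch (ROUTER, RTR_USER and rib.txt are placeholders - rib.txt being a saved capture of 'show ip route' - and it assumes your IOS image allows running a single exec command over SSH):

ROUTER=router.example.net        # placeholder
RTR_USER=admin                   # placeholder
TAG=65000
# the awk is naive: it keeps the second field of lines containing a prefix,
# so routes with two-word codes (e.g. "O IA", "O E2") will be missed
awk '$2 ~ /\// {print $2}' rib.txt | cut -d/ -f1 | sort -u |
while read prefix; do
    # -n stops ssh from swallowing the rest of the prefix list on stdin
    ssh -n "$RTR_USER@$ROUTER" "show ip route $prefix" \
        | grep -qi "tag $TAG" && echo "$prefix"
done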

  • Link to the Cisco IOS IP Command Reference; also see Table 62 on the same page.

    The following is sample output from the show route-map command:

    Router# show route-map
    route-map abc, permit, sequence 10
      Match clauses:
        tag 1 2
      Set clauses:
        metric 5
    route-map xyz, permit, sequence 20
      Match clauses:
        tag 3 4
      Set clauses:
        metric 6

    Murali Suriar : That shows me the configuration of the route-map; I'm looking for a way to find all routes that match the 'match tag' statement.
    From ascrivner
  • I'm assuming OSPF here, but I believe it's part of the show ip ospf database commands. I think the tag in the following commands is the same one you're referring to with your route-map.

    Router# show ip ospf summary-address
    OSPF Process 2, Summary-address
    
    10.2.0.0/255.255.0.0 Metric -1, Type 0, Tag 0
    10.2.0.0/255.255.0.0 Metric -1, Type 0, Tag 10
    
    Murali Suriar : I was thinking more in terms of BGP. In either case, I've yet to find a way that doesn't involve doing a 'show ip route' on every entry in the RIB and looking at the detailed output. :(
  • I haven't fully tried this, but it occurs to me that you could create a dummy route process with a route-map that redistributes matches into it.

    something like:

    router ospf 99
     redistribute bgp 6447 subnets route-map tagtest
    !
    route-map tagtest permit 10
     match tag 3549
    !

    This then should show you all of the tagged routes:

    router# sh ip ospf 99 database

    Murali Suriar : Nice approach; didn't even occur to me. In fact, a 'show ip route ospf 99' would get you a normal show ip route output. I'm a little loath to kick off a new routing process in production just for diagnostic purposes, but I think this is probably the only way to do it.
    Peter : As long as you don't neighbor it with anything (which would defeat the purpose and generally be a bad idea) the overhead should be minimal. Always use caution when messing with production though.
    From Peter
  • Your output shows BGP, which is the only protocol I know that does this:

    show ip bgp route-map redistribute
    

    This will effectively issue a "show ip bgp" but filtered by that route-map. For the IGPs, Peter's suggestion of a dummy process is the best I can think of.

    From Geoff

How do you get Exchange 2007 PowerShell commands for property changes?

When you perform certain operations in the Exchange Management Console GUI a window appears showing the PowerShell command that was executed to perform the operation. This is useful for learning how to create a PowerShell script to do the same thing.

Is there a way to get the PowerShell commands that are executed for minor operations in Exchange, like changing various attributes in a property dialog?

From some of the answers I can tell my question is not clear. I am referring to the Exchange Management Console, which has a GUI, not the PowerShell-based Exchange Management Shell. In the Console, when you perform operations that use a wizard, like adding new users, the final dialog shows a text box with the PowerShell command that was executed in the background. I am wondering if it is possible to get those commands when performing minor operations in the GUI.

  • You can run the get-member command on any item in PowerShell to get all the properties and operations for the object. For example, the command below will show you all the attributes, properties and operations available on a mailbox:

    get-mailbox bob | get-member
    

    You can get a list of all Exchange PowerShell commands on TechNet.

    From Sam Cogan
  • Type:

    get-excommand

    and you will see information on 368 cmdlets. You can confirm that using the command:

    (get-excommand).count

    To add some focus to your search for relevant Exchange commands, use wildcards with the get-command cmdlet. For example, to find cmdlets relevant to POP3 configuration, type

    get-command *pop*

    which returns information on any cmdlet whose name includes the character sequence "pop". The relevant commands are displayed.

    Anapologetos

    Source

  • The Exchange Management Shell will log all of the PowerShell commands it executes if you set a registry key that enables logging. The commands will be logged to the Event Viewer in the PowerShell folder.

    The key can be set by navigating to:

    HKLM:\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.PowerShell.Admin

    using Regedit and creating a string value named LogpipelineExecutionDetails with a value of "1".

    This registry value can also be created using the following PowerShell command:

    Set-ItemProperty HKLM:\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.Exchange.Management.PowerShell.Admin -Name LogpipelineExecutionDetails -value 1

  • You may want to use the "set-user" command. An example from a script I have:

    set-user -Identity "CN=$displayName,CN=Users,DC=contoso,DC=edu" -City $City -Department "$department" -Office $office -Phone "$phone" -StreetAddress "$POBox`r`n$Office`r`n$Department" -StateOrProvince $State -PostalCode $ZipCode

How do I set the date format to ISO globally in Linux?

I would like to globally set the Linux date format to ISO, which looks roughly like this:

YYYY-MM-DD HH:MM:SS
2009-03-16 15:20:00

With varying levels of detail, such as omitting time, seconds, etc.

I know that for some applications, you can configure this manually, but I'd like it to be automatically set for every program.

I'm specifically using Ubuntu Intrepid, but a general solution that would work across all distributions would be best.

  • Set your locale date environment variable LC_TIME to "en_DK". Set it in your .bashrc or similar, or check man locale for how to set it system-wide.

    On Arch Linux all of the locale settings are in /etc/rc.conf and customisations are set up in /etc/rc.local:

    #!/bin/bash
    # Local multi-user startup script
    export LC_TIME="en_DK"
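
    To preview the effect before making it permanent - a quick sketch, assuming the en_DK locale data has been generated on your system ('locale -a' lists what is available; use en_DK.UTF-8 if that is the variant you have):

    LC_TIME=en_DK date
    # typically prints an ISO-style timestamp, e.g. 2009-03-16T15:20:00 CET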
    
  • Some people would advise changing your locale to German ("en_DK"); this kind of works if you don't mind the day and month names being in German. Since I cannot post hyperlinks, and this board sees my Linux commands as hyperlinks... (nice one)... I can only say: search (Google) for how-to-change-date-formats-on-ubuntu and click the first link.

    Neil : He meant this link: http://ccollins.wordpress.com/2009/01/06/how-to-change-date-formats-on-ubuntu/
    GodEater : en_DK is not the German locale either, it's Danish.
  • Probably the best way to do this without breaking things is to follow the walkthrough at

    http://ccollins.wordpress.com/2009/01/06/how-to-change-date-formats-on-ubuntu/

    From Mez
  • It's explained at length in this guide: http://ccollins.wordpress.com/2009/01/06/how-to-change-date-formats-on-ubuntu/

    Neil : I went and found the link since prestiginate said he couldn't post hyperlinks. And I actually had been there before, but I guess I never bothered doing it on this machine, making me think whatever I tried before didn't work.
    From Neil

How to do things wrong?

Possible Duplicate:
Common mistakes made by System Administrators and how can we avoid them?

Slightly confusingly worded, but it asks a good question:

What are the most common pitfalls that people seem to fall into?

  • People being sys admins? One of my biggest issues is over-training or excessive research. I seem to spend more time reading journals and feeds than I do actually working. It's a tough habit to break.

    JJ01 : Totally in the same boat with you. I was once reprimanded for doing too much research on a project before implementation.
    scotthere : Not sure why I got voted down. I think time management is a legitimate issue for many people. Especially in our industry.
    From scotthere
  • just a few

    • not listening to others [ this starts with requirements gathering.. ]
    • not reading the f... manuals
    • assuming everything will go well [ who needs backups anyway ? ]
    • assuming that if things work now they will in the future [ who needs monitoring ? ]
    • relying blindly on 3rd parties... we have a 99.999% internet access SLA [ things will work fine. or not.. in the worst case you'll pay 10% less on your monthly bill in exchange for 10h of downtime; convince 'the business' to be satisfied with this ]
    • which leads us to not explaining clearly to non-techies what the consequences, costs, risks and opportunities are, and not double-checking that they actually understood.
    From pQd
  • Rushing to conclusions...

    Stop and think. Make sure brain is in gear before engaging mouth. Quit leading me on wild goose chases when troubleshooting problems and wasting my time reporting things that are really by-design behavior.

    From squillman
  • Making assumptions on 'what seems sensible'.

    E.g. A computer isn't turning on, so you think the power supply is shot because that's the sensible thing. Instead the user is calling on their mobile in the middle of a power cut.

    From workmad3
    • Underestimating Murphy
    • Not using checklists
    • Taking shortcuts
    • Assuming that almost complete is "good enough"
    • "Who would want to steal /our/ data?"
    • Implementation without proper testing
  • Document first, then execute

  • This question is pretty close to the post: "Common mistakes made by System Administrators and how we can avoid them"

    I'd like to add that one thing to do wrong is to NOT document your work.

    From l0c0b0x
  • Many of the "fiasco" scenarios that I've been called in to resolve come down to admins not applying consistent and scientific troubleshooting technique.

    When you're troubleshooting a problem in a "black box" (read: closed source software/hardware, 3rd party system, etc), you should change one thing at a time (and document your changes) and exercise a consistent test case with each change. If your hypothesis doesn't bear out, return things back to their original state and start again.

    Lather, rinse, repeat.

    What I see, more often than not, are frazzled admins running around making random changes without documenting what has changed, and without testing whether or not their change made a difference. Before long, the initial conditions are lost. When the issue is finally resolved, no root cause analysis can ever be done because no one is sure what fixed the problem.

    We make a bad name for our trade when we act that way.

  • overconfidence

  • Trusting your end users to tell the truth. Sometimes they may think that if they tell you what really happened that they would suffer some sort of repercussions, or perhaps they don't know which information is relevant. But, at the end of the day, it's best to be skeptical and to ask as many questions as possible.

    From Psycho Bob
  • Motivating Staff

    Get a contractor in to do a job which you haven't given them enough time to do, and which you've asked them to do during a time when they're swamped with other work. And then complain that you're running over budget.

    RAID

    When trying to rebuild a mirrored RAID array, select the new disk as the place to mirror from….

    Working with Crucial Services

    • Make sure that the only environment a crucial service can run on (the one that takes payments) is an old PC that used to belong to an Ex Developer.
    • Make sure that this box has faulty RAM.
    • Regularly play with the innards of this box.
    • When people ask what happens if the box goes down, tell them it’s a low priority.
    • When the box actually does go down, tell people that it’s ok, there are no issues, and that you’ll have it fixed in 5 minutes…

    Staff Motivation #2

    Break your development team’s dev environments, and delete half of the work they’ve done that week. A week later, ask why they didn’t do unpaid overtime to keep up to speed

    Be the person to break things, then go on holiday

    This morning, the first day our DBA is away on holiday for a week, we find that he's changed the servers to master-master replication. We're learning fast about how to fix replication issues, but even we know that ignoring replication issues by default is a bad idea.

    All things that have happened to myself in the last... 4 months. Courtesy of http://www.stopyouredoingitwrong.com (my website, and the reason for this question!)

    From Mez
  • Plan for the worst, hope for the best - anything else is wrong in my book.

    From Chopper3
  • My two golden rules

    1. Don't install from source on a binary distribution. It breaks your upgrade path and security patches.

    2. Run the same versions of all packages on development and production.

    Maciej Delmanowski : I would rephrase 1. as: don't install anything by bypassing your package management system. You can install software from sources, but use system tools to create native packages (dpkg-buildpackage on Debian, for example). It will make tracking that software easier in the future.
    From Dax
  • From Deep Thoughts by SysAdmins...some of my favorites

    The Mack Truck Scenario: if no one will be able to figure this out should you get hit by a Mack truck, then you are doing something wrong.

    If you haven't thought of at least one potential negative outcome of hitting 'enter' after the command you just typed, then you don't understand the command well enough to use it on a production system.

    If you do it more than once, automate it. If you can't automate it, then document it. Document it anyway.

    AND MY FAVORITE - if it seems like someone else may have encountered this problem before, they probably have. Google for the answer.

    From cop1152
  • Two things:

    Make sure you have a solid backup plan for your data.

    Also, it is critical to have a solid support team around you when you get stuck. Being the lone ranger doesn't work.

  • Assuming the user will have enough knowledge of how the software should work, and skimping on error handling as a result - that comes back to bite you in the butt.

    From JB King
  • Here are my 9 rules to Failure.

    1. DON'T think for yourself... lack of confidence.
    2. DON'T keep it simple... the best way, the hard way.
    3. DON'T control your inner chaos... AVOID becoming a stress Zen.
    4. IGNORE your environment... you're the only one!
    5. DON'T keep backups... why bother with the disk space?
    6. DON'T test backups... AVOID backups, skip this waste of time.
    7. AVOID exploring new paths of knowledge... better to walk familiar paths.
    8. DON'T gain extra motivation... winners suck... ordinary fits best!
    9. AVOID social media, troubleshooting and reading manuals... only trust your own experience.

  • Forgetting that there are hundreds or thousands of people who will be affected by the consequences of your actions.

    Failing to get the basics adequately covered in your blind rush to the exciting stuff.

    From mh
  • Most common pitfalls that I've seen and fallen into myself, ranked in order of importance/criticality.

    1.) Assumption. Example: "I assumed that the problem had to be with the network card and had been troubleshooting the device for an hour before it occurred to me to check the cable." This is one of the biggest killers I've seen. Never assume anything, and remember Mr. Holmes's lesson: `When you take away everything that cannot be the problem, what you're left with, no matter how improbable, must be the answer.'

    2.) Arrogance. Example: "I'm the freakin' senior admin, what does the junior think he/she can positively contribute to the troubleshooting?". During an ITR, I've had a web developer point out a very small, yet critical problem in a router configuration that would have saved me hours of troubleshooting. Another set of eyes on a problem can't hurt, and many times is even beneficial for training.

    3.) Lack of RTFM. Example: "I've been working with Brocade Fibre Channel switches for years. I know how to zone a fabric, ok?". The tech in question ended up creating a zone for a tape library that consisted of a massive number of devices all trying to talk to the tape library at once, instead of a one-to-one zoning plan. Without a quick consult with `El Manuel', the tech didn't know he was far outside best practices. The one-to-one example was in the first three pages.

    4.) Poor change-management/lack of communication and documentation. Example: The umpteenth email sent out to a group asking, "Did anyone mess around with the webserver over the weekend? Because it's down, we're out of clues and corporate wants it back up ASAP." This is another huge killer. No matter how good of an admin you are, if you didn't document or communicate what you did to fix that 20-hour router outage, and another one goes down three hours after you've finally gone home to get some sleep, you're only a.) looking like a fool and b.) doing yourself harm.

    5.) Bad management/dysfunctional team. Example: Fear of looking stupid or assassination from co-workers causes things to be `swept under the rug', etc. etc. A good team is a reflection of its leader and vice versa. A team's leader is responsible for a.) Ensuring the entire team gets credit when someone does a stellar job, and rewarding the stellar worker. b.) Shielding the team (AND the responsible party) from the wrath of others when someone screws up, taking full responsibility for the problem, privately counseling the responsible one and taking positive steps to ensure it never happens again. A good manager will also remove all obstacles in the path of his or her team.

    Finally, A good leader/manager, especially in tech, is NEVER the smartest guy on the team. Good leaders surround themselves with smarter advisers. A leader's job is to enable the team, not become bogged down and responsible for every little detail, in effect carrying a team who can't get the job done. It becomes a self-defeating fallacy.

    HTH.

How to know if a DNS update is taking too long?

I just transferred a domain from one host and registrar to another. 48 hours later the site is still down. How do I know if I should just keep waiting or if there is a problem?

UPDATE: nslookup says it's an NXDOMAIN (non-existent domain?) - what does that mean?

UPDATE 2 - SOLVED:

During the transfer, GoDaddy had set the wrong name server - a quick phone call rectified this.

Thanks for all your answers - I have learnt a lot!

  • I would check with various DNS servers. You can use a tool like nslookup or dig to point directly at a specific DNS server. Start with the hosting company's server, then try the ones at your work, from your ISP, your friend's work, etc.

    From kbyrd
  • Do you know what the DNS entry's 'Time To Live' (TTL) is?
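
    If you're not sure, you can read it straight off a dig answer - the second column is the TTL in seconds (the host name and name server below are placeholders):

    dig +noall +answer www.yourdomain.com @ns1.yourdnshost.net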

    From Chopper3
  • I would follow these steps first:

    1. Query whois to check which DNS servers are to be used for your domain.

      whois domainname.com

    2. Then check these servers are returning results:

      dig A myhost.domainname.com @my.dnsserver.net

    Do this for all of the DNS servers listed in the whois output, changing the @my.dnsserver.net part each time.
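
    A quick way to run that check against every listed server in one go (a sketch - substitute the real server and host names from the whois output):

      for ns in ns1.example.net ns2.example.net; do
          echo "== $ns"
          dig A myhost.domainname.com @"$ns" +short
      done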

  • There are a couple of tools you can use to help you see if things are configured correctly. There are web front ends available for them but I am more familiar with command line variants so that is how I will describe them below.

    Check with whois where the domain is currently hosted

    To get the information for this site you would use the command "whois serverfault.com"

    This gives the response (trimmed for brevity):

       Domain Name: SERVERFAULT.COM
       Registrar: GODADDY.COM, INC.
       Whois Server: whois.godaddy.com
       Referral URL: http://registrar.godaddy.com
       Name Server: NS21.DOMAINCONTROL.COM
       Name Server: NS22.DOMAINCONTROL.COM
    

    This output shows the information that the builders of the .com zone file (InterNIC) hold. It also shows that further information is available from the GoDaddy whois server.

    If this output does not show your new Registrar and name servers the transfer is not yet complete and you should contact your old registrar.

    If this shows your new registrar but not the correct DNS information, the transfer has been done wrong and you should contact your new registrar.

    If the details are correct but the site isn't working, then either the .com top-level zone has not yet been updated and you need to wait, or the new name servers are not set up correctly. To find out which, use the command "dig".

    First check the GTLD servers (authoritative for .com):

    "dig ns @a.gtld-servers.net serverfault.com"

    The results give:

    ;; QUESTION SECTION:
    ;serverfault.com.    IN NS
    
    ;; ANSWER SECTION:
    serverfault.com.    172800 IN NS ns21.domaincontrol.com.
    serverfault.com.    172800 IN NS ns22.domaincontrol.com.
    
    ;; ADDITIONAL SECTION:
    ns21.domaincontrol.com. 172800 IN A 216.69.185.11
    ns22.domaincontrol.com. 172800 IN A 208.109.255.11
    

    If this information is correct you can continue to do the same query against your new providers DNS:

    "dig ns @ns21.domaincontrol.com serverfault.com"

    ;; QUESTION SECTION:
    ;serverfault.com.    IN NS
    
    ;; ANSWER SECTION:
    serverfault.com.    3600 IN NS ns21.domaincontrol.com.
    serverfault.com.    3600 IN NS ns22.domaincontrol.com.
    
    ;; Query time: 76 msec
    ;; SERVER: 216.69.185.11#53(216.69.185.11)
    ;; WHEN: Mon Jun  1 16:47:51 2009
    ;; MSG SIZE  rcvd: 85
    

    If this is showing the correct information then you likely have a caching problem and will need to wait for old entries held on caching name servers to expire.

    : dig doesn't return an answer section for the domain!
    Russell Heilling : Which dig has no answer section? 1) a.gtld-servers.net, 2) the authoritative server for your new registrar, or 3) your local resolving name server? If 1, and the whois lists your new registrar, then ask the new registrar to check. Likewise if 2, also contact your new registrar. If 3, you may have a negative cache result, which you will either need to wait to expire or flush from the server cache ("rndc flush" on BIND).
  • First you need to check the dns servers for your domain:

    • whois domainname.com

    If the DNS servers are still your former DNS servers, then the transfer failed. If everything is OK, the next step is to check the DNS records on your new DNS servers:

    • dig @dns.domainservers.com domainname.com
    • dig @dns.domainservers.com www.domainname.com

    You need to get a result like:

    • www.domainname.com. 86400 IN A A.B.C.D

    Where A.B.C.D will be the IP address of the server where you're hosting your site. If that is incorrect you need to check your domain records.

    Finally, you can try other DNS servers, like OpenDNS, to check whether your domain has been updated across the internet:

    • dig @resolver1.opendns.com domainname.com

    I think the problem is in step two; maybe you forgot to add the records on your new DNS servers.

    tomjedrz : +1 for the troubleshooting guide .. check if domain registration was moved, then check DNS where registration is pointing.
    From HD
  • What happens when you transfer domains between registrars depends on the regulations for the TLD or registry. It might just be that your new registrar is no longer in possession of the old DNS records, in which case you might want to seed them with new ones. 48 hours is already a long time to be 'off the internet', so I would not hesitate to contact your new host/registrar.

    following up on Russell's answer:

    "dig soa serverfault.com @ns21.domaincontrol.com"

    ;; ANSWER SECTION:
    serverfault.com.        86400   IN      SOA     ns21.domaincontrol.com. dns.jomax.net. 2009031400 28800 7200 604800 86400
    

    "dig soa serverfault.com @ns22.domaincontrol.com"

    ;; ANSWER SECTION:
    serverfault.com.        86400   IN      SOA     ns21.domaincontrol.com. dns.jomax.net. 2009031400 28800 7200 604800 86400
    

    The first number after dns.jomax.net is the zone's serial number, quite commonly a date with an added counter for the number of changes that day; it's usually a bad sign if these are out of sync between servers.
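
    To compare the serials on all of the listed servers in one go (a sketch reusing the name servers from above):

    for ns in ns21.domaincontrol.com ns22.domaincontrol.com; do
        echo -n "$ns: "; dig +short soa serverfault.com @"$ns"
    done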

    You can also check for your new record directly, say we changed www.serverfault.com recently:

    "dig a www.serverfault.com @ns22.domaincontrol.com +norec"

    ;; ANSWER SECTION:
    www.serverfault.com.    3600    IN      CNAME   serverfault.com.
    serverfault.com.        3600    IN      A       69.59.196.212
    

    The +norec option disables the "recursion desired" bit in the query, which is on by default. Authoritative name servers that are also caching might set you off with answers from their cache if you forget to specify +norec, which can be quite misleading at times.

    From ZaphodB

Personal repository address on netinstall

We've got a working lenny repository in our office. This is the sources.list line from a machine where the repo works fine:

deb http://fai.foo.com/ftp.es.debian.org/debian lenny main contrib non-free

I would like to install lenny on another machine but using:

http://fai.foo.com/ftp.es.debian.org

as the address and /debian/ as the directory doesn't work.

Are these addresses correct? Must the repo and the install image be exactly the same version? (netinstall image: debian-501-i386-netinst.iso)

  • I don't know what foo.com is, but I guess it's just something you made up? Try some of the following addresses.

    http://www.debian.org/mirror/list

    Host name            FTP              HTTP  
    
    Spain
    ftp.es.debian.org
      (ftp.gul.uc3m.es)     /debian/        /debian/  
    debian.com.es      /debian/  
    debian.grn.cat          /debian/        /debian/  
    ftp.caliu.cat           /debian/        /debian/  
    ftp.cica.es             /debian/        /debian/  
    ftp.gva.es              /mirror/debian/   /mirror/debian/  
    ftp.rediris.es          /debian/        /debian/  
    ftp.udc.es           /debian/         /debian/  
    ftp.um.es             /mirror/debian/   
    
    Apiman : "foo" it's just an example. I cannot use a public mirror because of company's firewall. Just company's mirror.
  • Finally, I modified /target/etc/apt/sources.list from the console (and ran apt-get update, of course) while installing, and that did the trick. Anyway, it would be great to know why the elegant way didn't work.

    From Apiman
  • Go to the dists directory inside the debian folder and create a symlink to lenny named stable:

    ln -s lenny stable
    

    The installer looks for stable or testing and not for lenny.

    I found this by looking at the Apache logs when I first encountered this problem.
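
    A quick way to confirm the symlink is visible from a client (a sketch, reusing the repository URL from the question):

    wget -qO- http://fai.foo.com/ftp.es.debian.org/debian/dists/stable/Release | head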

    From Dax
  • The mirror should be the full path to the base of the mirror. This directory should contain the "dists" folder.

    So, it's basically the same thing as you'd put in your sources.list

    http://fai.foo.com/ftp.es.debian.org/debian

    A quick breakdown of the sources.list line, which is

    deb http://fai.foo.com/ftp.es.debian.org/debian lenny main contrib non-free

    • deb - this means that this is a binary repository
    • http://fai.foo.com/ftp.es.debian.org/debian - the URL of the repository
    • lenny - the distribution that you're working with (lenny, etch, stable, unstable, etc)
    • main contrib non-free - the components of Debian you wish to use

    In fact, I have an old image from when I was trying to explain it to people before

    http://people.debian.org/~mez/sources.list.png

    From Mez

Nginx ModWsgi Bad?

I was thinking of deploying Nginx with mod_wsgi. However I read this blog:

http://blogg.ingspree.net/blog/2007/11/24/nginx-mod-wsgi-vs-fastcgi/

There, the author of mod_wsgi for nginx says that the very few worker threads can be blocked for a relatively long time waiting on your script to return, which will slow down the server.

How true is this? Should I just stick to fastcgi or is there something better?

  • I recommend using FastCGI. The last update of mod_wsgi for nginx was in 2008 or earlier.

    Sample Django.conf for fcgi:

    # Django project
    server {
        listen  80;
        server_name www.server.com;
    
        location / {
            fastcgi_pass unix:/home/projectname/server.sock;
    #       fastcgi_pass 127.0.0.1:8000;
            include conf/fastcgi.conf;     
            access_log  /logs/nginx_django.log  main;
        }
    
        location ^~ /admin/ {
            fastcgi_pass unix:/home/projectname/server.sock;
            include  conf/fastcgi.conf;     
                allow 222.222.0.0/16;
                allow 111.111.111.111;
                deny all;
            access_log   off;
                auth_basic "Gimme the key!";
                auth_basic_user_file /etc/nginx_passwd;
        }
    
        location ~* ^.+\.(mpg|avi|mp3|swf|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|txt|tar|mid|midi|wav|rtf|mpeg)$ {
            root   /home/projectname/media;
                limit_rate 2000K;
            access_log  /logs/nginx_django_media.log  download;
            access_log   off;
        }
    
        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|bmp|js)$ {
            root   /home/projectname/static;
            access_log   off;
            expires      30d;
        }
    }
    
    server {
            listen  80;
            server_name server.com;
            rewrite  ^(.*)$  http://www.server.com$1;
        access_log  /logs/nginx_django.log  main;
    }
    


    Fastcgi.conf

        fastcgi_pass_header Authorization;
        fastcgi_intercept_errors off;
    
        fastcgi_param PATH_INFO      $fastcgi_script_name;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param SERVER_NAME       $server_name;
    
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT         $document_root;
        fastcgi_param SERVER_ADDR           $server_addr;
        fastcgi_param REMOTE_USER       $remote_user;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;     
        fastcgi_param SERVER_SOFTWARE   "nginx";
        fastcgi_param GATEWAY_INTERFACE     "CGI/1.1";
    
        fastcgi_param UID_SET    $uid_set;
        fastcgi_param UID_GOT    $uid_got;    
    
    #    fastcgi_param SCRIPT_NAME   $fastcgi_script_name;
    


    Finally, start the FastCGI backend:

    su www -c "./manage.py runfcgi method={threaded | prefork} {socket=/home/projectname/server.sock | host=127.0.0.1 port=8000} pidfile=/home/projectname/server.pid"
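
    Once the backend is running, a couple of quick sanity checks (paths and host name taken from the config above):

    nginx -t                                              # the configuration should parse cleanly
    ls -l /home/projectname/server.sock                   # the socket should exist and be readable by nginx
    curl -I -H "Host: www.server.com" http://127.0.0.1/   # expect 200/302, not 502 Bad Gateway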

    Good luck!

  • Because nginx is an event driven system, it is in effect single threaded at the lowest level. In other words, not much different to the prefork MPM when using Apache. This means that once a request is being handled in the WSGI application running under nginx/mod_wsgi, no parallel tasks can be carried out.

    In the prefork MPM of Apache this isn't too serious an issue, because the Apache process will not accept a connection unless it is able to handle it immediately, and so any other requests will just get handled by another process. This isn't the case with nginx/mod_wsgi, however, as the use of an event driven system means it can greedily accept many requests at a time even though it technically can only handle one at a time. Those requests will then get processed one at a time, and so later requests which were already accepted by the process will be delayed.

    Further explanation of this problem can be found in:

    http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html

install .app file on MacOS via script?

Hello.

I have an .app executable generated for Mac OS X. Is there an easy way to install this app into Applications via a script, so it can be used by end users? (I need to install it on multiple computers and really don't want to create an intermediate .pkg installer for it.)

  • Here is a specific example:

    scp -r /Applications/Opera.app sysadmin@CES-iBookGr4-88.local:/Applications
    

    This recursively copies the folder for, in this example, the Opera web browser, from my computer to a computer (named CES-iBookGR4-88) that I have an ssh account on.

    More generically:

    scp -r /Applications/{App.app} {user}@{RemoteDestination}:/Applications
    

    Where App.app represents the application and the remote destination is the host name or IP address of a computer that you have a user account on with SSH access. (Turn on Remote Login in the Sharing preferences.)

    To go the other way, do this:

    scp -r {user}@{RemoteSource}:/Applications/{App.app} /Applications
    

    In both cases, you will likely be asked to verify the other computer (just type in yes when asked to id it) and will be asked for the user's password.

    porneL : and if some clients may already have this app, then rsync would work well.
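
    If you need to push it to a whole lab of machines, a minimal sketch is to loop over a host list (the host names are placeholders; without SSH keys you will be prompted for each password):

    for host in CES-iBookGr4-88.local CES-iBookGr4-89.local; do
        scp -r /Applications/Opera.app "sysadmin@$host:/Applications"
    done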

Connecting Tomcat and IIS

When running on Linux I know how to connect Tomcat to Apache. But how is it done when running Windows and IIS?

  • I've done this in the past using the Apache mod_jk project. It is a .dll used by IIS that, when configured properly, allows IIS to communicate with Tomcat and serve JSPs.

    http://tomcat.apache.org/connectors-doc/

    From zmf
  • Have a look at this from the Tomcat how-to pages.

    From squillman

A way to finish off a dying hard drive so it can be sent in for "repair"

I have a 500 GB SATA hard drive in my machine that all of a sudden started giving me I/O errors, until Linux simply disconnects the drive. Reboot, and then it works for a random period before failing again.

The drive is within warranty, but I've had bad experience with shops that are unable to reproduce a problem, as the drive doesn't fail all the time. Then they simply send me a bill and the drive back.

What is my best course of action to make sure they can reproduce the problem?

Update: Those of you who have recommended the diagnostic tools - it's a good, valid answer, except that, as stated in my question, I'm running Linux and these tools do not exist for it. As for 'gaming' the store, it's not about that. The drive is well on its way to being completely unusable without any help from me. I'm just talking about speeding up the process.

Update 2: I don't really know why I decided to ask this here. I was hoping for suggestions like 'do a bad-sector test' or 'try to stress the drive by copying random data to it with dd'. I will say this again, so stop suggesting it (or telling me not to)... I will not in any way void my warranty by messing with the hardware itself; that includes bulk erasers, huge magnets, too much power or anything else that will show up when the drive is eventually sent back to the manufacturer.

  • Does the drive manufacturer have a utility to check the drive?

    Often they will provide a utility that you can boot with that will run some diagnostics - this should probably be your first step. Check their website and download it if available.

    Paul Tomblin : I've had to return two Deathstars, and in both cases the IBM/Hitachi DFT program ran and gave an error code that I could type into their web site and get an instant RMA.
  • I would also recommend ensuring the controller on the bottom side of the drive isn't getting too hot. This sounds like a heat issue to me.

    If you're able to eliminate heat as a cause, then I would call the manufacturer. I've never had a problem when talking with the manufacturer and getting an RMA first. When sending it in, I would also recommend including a detailed description of exactly what you've seen.

    From Rugmonster
  • Drive manufacturers usually provide diagnostic utilities that you can run before sending in the drive. Once you get I/O errors out of their utility, you can include the log and they'll be less likely to contest your problem.

  • I think the best thing to do is to call them and discuss this situation - any form of 'gaming' them is likely to be pointless and very possibly counterproductive. These people are used to dealing with a range of customers' problems and I would imagine they'll be happy to help you if you ask.

    Aaron : I agree. I've had the same issue, and when I tell them the drive is dead, they replace it. I've never had to force a drive to fail to get it replaced.
    From Chopper3
  • I would recommend using SpinRite as well as the manufacturer's tools. I have previously used it to recover data on a dead drive. The great thing about SpinRite is that it can detect the rate of errors (errs per MB).

    Usually when RMA'ing a drive, they make you include a status code of some kind from their diagnostic tools.

    duffbeer703 : -1 Gibson reference
    Brad Gilbert : It would also help diagnosing what exactly is wrong with a given hard drive.
    pcampbell : @duff: sorry you don't like Gibson, but the product's value is undeniable, IMO - it has saved my a** before.
    From pcampbell
  • Well, you could always take a bulk tape eraser and run it across the hard drive. That will make sure it never works again, ever.

    As long as you don't have moral issues about it.

  • I strongly recommend NOT trying to fool them with the "tricks" one may have heard about (high voltage, microwave oven, bulk tape eraser). They are used to handling such things much more often than you or I are.

    From lImbus
  • I'm not known for my huge amount of patience, so I'll just answer this myself. Maybe this will help someone later.

    Badblock check

    badblocks -v /dev/sdx1 (replace sdx1 with drive partition)

    Write stress test

    dd if=/dev/urandom of=/dev/sdx (this will of course wipe the entire disk)

    Read stress test

    dd if=/dev/sdx of=/dev/null (reads every sector and sends it to the null device)

    SATA disconnects

    I have a USB-to-(S)ATA adapter that is capable of resetting the USB device if the disk stops responding at any point. This serves as a workaround when Linux disconnects the drive for too many I/O errors.

    carlito : You may wish to use something faster than /dev/urandom to generate "random" data. A larger blocksize (e.g. bs=1024k) would be much faster as well.
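
    Folding in that suggestion, the same stress tests with a faster source and a bigger block size, plus a check of the kernel log for the I/O errors described in the question (the write test still wipes the entire disk):

    dd if=/dev/zero of=/dev/sdx bs=1024k conv=fdatasync   # write stress
    dd if=/dev/sdx of=/dev/null bs=1024k                  # read stress
    dmesg | grep -i 'I/O error'                           # anything the kernel logged along the way
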
    From Andrioid
  • Possibly they aren't really testing the drive thoroughly.

    Give them your documentation of the problem. If that doesn't satisfy your agreement, you have a fundamental problem.

    Based on your description, your problem could be an interaction between the controller and the drive. For example, your controller could be bad at handling a marginal drive. Or you could have a bad controller.

    Ideally your agreement with the vendor would specify whether it is expected/guaranteed to work with your controller - or it would involve them taking responsibility for the controller (and driver) as well.

    I have seen a lot of SATA drives that misbehave in the manner you describe - sometimes as part of the normal course of business, sometimes while in the process of failing. Sometimes it is admitted to be a firmware bug. 500GB drives were especially bad in my experience.

    You will help your case significantly by repeating the problem with a different controller, since odds are there is no promise for the drive to work with any particular controller, or you would not be having this problem.

    From carlito
  • Give it a good workout: http://www.textuality.com/bonnie/

    A few days of that should show whether it really is about to kark it.

    Bonnie is in most distros' repositories, IIRC.

    From Tom Newton
  • The best way to finish off a dying hard drive?

    If you've got a rubber mallet, whack it with that - it'll break something internally, but not leave any marks.

    Time tested solution - but only if it's under warranty!

    From Mez

How to migrate a web site from one server to another with minimal downtime?

I have a server hosting a web site and other services that needs to be reinstalled. I would like to relocate these services to another server temporarily, with as little downtime as possible. Both servers are in the same data center, and can be on the same network switch.

What is the best technique for moving these services with minimal downtime? The site is database-driven, so ideally I want a "railroad switch" event, where I can ensure all traffic is moved to the new server at once. I don't want to have a situation where the old database gets updates after I've migrated the data to the new one.

Two things I have considered:

Change the DNS to point to the temporary service. The major issue here is that I don't control the propagation time for DNS, and other servers can hold on to the cached results for a while, leaving the site "down" for users that get the old address.

Is there a way to fix that problem with Apache + redirects? I suspect not, since name-based virtual hosting breaks without the domain name, which I can't use because it's stale.

Bind the old IP address to the new server and (temporarily) assign the old server a different IP during reinstallation. I can leave DNS alone in this case.

Are there any other simple solutions I am overlooking?

  • If you own the domain in question, you can have the TTL for that DNS entry changed to a lower value by your DNS admin (say 3-5 minutes). Let this new setting propagate out over the internet for a few days before you make your actual DNS IP change. This should ensure that any cached DNS entries get updated quickly after your change.
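
    One way to confirm what the world is actually seeing (host names are placeholders) is to compare the authoritative answer with what a resolver has cached - the second column is the TTL in seconds:

    dig +noall +answer www.example.com @ns1.yourdnshost.net   # authoritative: the full TTL
    dig +noall +answer www.example.com                        # your resolver: the remaining cached TTL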

    Steve Madsen : My main concern here is that I've heard anecdotal stories of large providers ignoring TTL and caching for days or even weeks, but perhaps I shouldn't worry about users on such networks.
    chaos : Disregard AOL at your peril.
    From Sergio
  • Use replication on your database servers. That will solve your problem with database updates hitting both servers in the meantime.

    From miHost
  • Ideally you set the TTL of the DNS host record way down a couple of days ahead of the scheduled time, but if you have no control over that (or can't work with someone on that) then that's shot.

    If not, the only thing really is to build the new server until it's ready for production and then schedule a few minutes of downtime while you switch out the boxes...

    From squillman
  • You're right that DNS cutover is completely unreliable. What I prefer to do is, at the same time as the DNS is changed, switch the database configuration of the old site so that it connects to the new server's database. Ta-da, all your updates going to one place.

    The site will presumably run slower for people who connect to the old server, of course, but that'll only last until they get in sync.

    From chaos
  • Does your server have a public IP on the server itself? If there is a NAT mapping, you can just change the NAT mapping to have the same public IP point to the new internal IP of the new server.

    I would think it is often better to have a short maintenance window - with a maintenance page and time to test - than to aim for zero downtime, myself.

  • If you have LAN-speed connectivity between the two systems and full access, using DRBD (drbd.org) may be a good option to get the data synced between the systems before a cutover, and back again afterwards.

    Setup DRBD and let it sync
    Shut down db & web server
    Switch drbd on original machine to secondary
    Switch drbd on second machine to primary
    Change original server IP
    Add old IP to new server
    Bring up db and web server on secondary system

    Flip them around when the original system is rebuilt
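
    For reference, the role flip in the middle of that list boils down to something like this (a sketch; 'r0' is a placeholder resource name):

    drbdadm secondary r0    # on the original machine, once its services are stopped
    drbdadm primary r0      # on the second machine
    cat /proc/drbd          # confirm the roles and that the data is UpToDate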

    The option to use database replication is good also if your "data" is primarily in the db

    Waiting for DNS propagation even with a low TTL will provide 'inconsistent' results

  • Provided there are no other services bound to the IP, go with switching that over. It doesn't take long and you can be absolutely sure that traffic is going to the correct destination.

    Just be aware of neighbouring machines' ARP caches. It's good practice to use arping -s after the change.
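
    Roughly (a sketch; the address and interface are placeholders, and the arping flags shown are the iputils ones - other arping implementations use different letters):

    ip addr del 192.0.2.10/24 dev eth0   # on the old server: release the address
    ip addr add 192.0.2.10/24 dev eth0   # on the new server: take it over
    arping -U -I eth0 -c 3 192.0.2.10    # gratuitous ARP so neighbours refresh their caches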

    From Dan Carley
  • Migrating a server.. What a pain.

    Luckily, you have everything in the same data centre.

    It really, however, depends on what apps etc you've got, and making sure that you've configured all of them on the new box.

    Generally, within my workplace, we don't use IP addresses in configurations; we use DNS names. But these DNS names are only ever defined in /etc/hosts.

    This means that if we need to change an IP address for something, we just change the hosts file, and everything points at the new location.
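
    For example (hypothetical name and address), every machine carries an /etc/hosts line like:

    10.0.0.12   db1.internal

    and the cutover is just editing that one line everywhere to point db1.internal at the new box.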

    Again though, it really depends on how you want to do the switchover - gradual or not. You at least need to make sure that the data sources etc. are exactly the same on both machines, so create a replicated slave for a database, and so on. Basically, when you pull the plug on one machine, the other machine should be able to take over instantly, and in the same state. This is a good reason to keep servers separate. So don't have one server running all your mail, your databases, your webapps etc. Make sure that you don't have a single point of failure.

    Here's how we did a recent server move (from one datacentre to another)

    1. Weeks before, change the TTL on the DNS for the domain(s)
    2. Setup new server, make sure it has exactly the same code
    3. Setup Master<->Master replicated slave in new datacentre
    4. Point everything at old datacentre while testing (SSH tunnels are your friend!)
    5. "Flip the switch" - run a script that changes everything on the all the servers (old and new) to point at the new servers (make sure that you do this all at the same time, or you could get replication issues!)
    6. Migrate DNS records - point the DNS at the new server
    7. Monitor - watch the old servers until traffic tails off.
    8. Decomission old servers

    While this isn't exactly a full "blow by blow" of everything we did, it gives a good overview.

    From Mez
  • It sounds like you might best be served with a relatively simple solution ... because you can tolerate a bit of downtime. I would avoid fooling with DNS, because you have little control over the propagation/caching delays.

    1- build temp server
    2- bring down services on primary server
    3- move/copy key data from primary server to temp server
    4- change primary server to another IP address
    5- change temp server to primary IP address, bring up
    6- fix primary server (on different IP)
    7- bring down services on temp server
    8- move/copy key data from temp server to primary server
    9- turn off temp server
    10- change primary server back to primary IP address, bring up

    The only downtime is when the data is moved between servers, and will vary depending on how the data is moved.

    Note: if you have a firewall and are doing NAT, changing the NAT between primary and temp is a good alternative to swapping IP addresses and will reduce the downtime.

    From tomjedrz
  • I wrote about how to move a web server to another machine on my blog. It covers a lot, including database issues.

    http://mysqlbarbeque.blogspot.com/2009/03/how-to-move-your-web-server-with-no.html

    From Jonathan

Testing RAID

How does one fully evaluate a RAID configuration?

Pulling drives is one thing, but are there tools and techniques for more?

I've considered putting a nail through a running drive (powder actuated nailgun) to see what would happen, or simulating various electrical anomalies (shorts/opens in cable, power overloads and surges, etc).

What should be tested, and how?

-Adam

    • In setups where hot-swap isn't an option, many RAID implementations (e.g. mdadm on Linux) have a set-faulty command that simulates a drive failing (a sketch of this follows the list).
    • In setups where hot-swap is okay, yank a drive!
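
    With mdadm that looks roughly like the following (the array and member names are placeholders):

    mdadm --manage /dev/md0 --set-faulty /dev/sdb1   # simulate the failure
    mdadm --manage /dev/md0 --remove /dev/sdb1       # remove the "failed" member
    mdadm --manage /dev/md0 --add /dev/sdb1          # re-add it and let the array rebuild
    cat /proc/mdstat                                 # watch the rebuild progress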

    I think your testing should cover the reasonable cases that you plan for. If you're trying to set up a server in the bush, then electrical fluctuations are reasonable test suites. If you're in a data center, the Service Agreement probably covers power.

    If you think a drive wildly exploding inside a rack is reasonable - then test it. Maybe you're setting up a server in a command center in Baghdad. But once again, less likely if you're in Washington State.

    As a general rule, your tests should cover all expected cases:

    • Drive is old and eventually goes bad (find a drive on its last legs, get it running, then pound it till it fails)
    • Drive fails a SMART test but seems fine, and you want to replace it just in case
    • General drive replacement because of size/performance upgrade or you just heard the batch was bad

    And reasonable extreme cases.

    • Server suddenly losing power - okay.
    • Server itself being hit by lightning - not so much.
    • Rack falling over - okay.
    • Rack hit by truck - not so much.
    • Drive being jostled - okay
    • Drive being shot-putted - not so much.

    And most importantly - RAID doesn't protect against drives silently corrupting data! So make sure you're doing hashes and file verification!
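
    One low-tech way to do that (a sketch; the data path and checksum file are placeholders):

    find /data -type f -exec sha256sum {} + > /var/tmp/data.sha256   # record a baseline
    sha256sum -c --quiet /var/tmp/data.sha256                        # later: list any file whose contents changed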

    From Tom Ritter
  • It is indeed important to test a drive failing inelegantly if you care about the ultimate reliability of the overall solution. Every failed RAID solution (meaning the redundancy does not protect against failing drives) I have seen is due to the failure to test real drive failures. The normal test is to pull a drive, claim that drive failure has been tested, and move on.

    The best solution is probably to have a collection of marginal drives, or modified firmware that causes inconsistent responses. Only storage vendors are reasonably likely to have this capability.

    I like the idea of putting a nail through a running drive, but the forces on adjacent drives might result in an unrealistically catastrophic failure. Or the complete failure of the drive may result in an unrealistically clean failure.

    If I was allowed to do legitimate testing of a RAID, I would destroy a few drives with varying means. Hook up wires to random components on the drive's board and fry them or short them. Indeed put a nail through a drive if the geometry of the enclosure makes this unlikely to destroy adjacent drives. (I think the resulting jostling of the remainder of the array is a reasonable test). Intercept a drive's data path and return every possible error, nonsensical results, or correct results delayed by random amounts of time.

    Expect drives to return the wrong block sometimes. Expect drives to cause any conceivable electrical problem on their connection.

    My experience is that no one considering a storage purchase wants to do real testing. This could expose real problems. I'd be very interested to hear if there is anyone who actually tests storage reliability - certainly they are not publishing their results.

    From carlito