Sunday, January 16, 2011

Backing up Linux server to Windows server?

I need to back up a (virtual) Ubuntu server. The backup media (an external USB disk) is mounted on the Windows (Hyper-V Server) host. The Windows servers on the same host can simply back up using Windows Backup over SMB.

How should I go about backing up the Linux box, given that it'll end up on an NTFS-formatted disk?

Update

I'm not so sure that Samba will work -- it won't preserve symlinks, devnodes, permissions, etc. Similarly, it's not going to preserve filename cases and other odd characters.

I'd like it to be full-fidelity, so that I can use it for disaster-recovery...

  • Installing (and configuring) Samba will allow you to back up the Linux host just like your other Windows servers.

    From Matt
  • You only need the samba client on Ubuntu - most likely this is already installed. Mount the share with:

    sudo mount -t cifs //netbiosname/sharename /media/sharename -o username=winusername,password=winpassword,iocharset=utf8,file_mode=0777,dir_mode=0777
    

    You can then perform your backup using cp, rsync, or your program of choice.
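
    For example, a simple rsync onto the mounted share might look like this (paths hypothetical; note that CIFS will not preserve symlinks, ownership, or permissions, so this is only suitable for plain file copies, not the full-fidelity backup the update above asks for):

    rsync -rtv /etc /home /media/sharename/backup/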

    Alternate: If you can't use Samba, you may want to just create a large file on the Samba share (or directly on the USB drive, since it is on the host you may be able to expose it to your VM), then build a "loopback" filesystem inside that file to do your backup into.

    dd if=/dev/zero of=/tmp/test.img bs=1024 count=10000000 # ~10 GB image file
    mkfs -t ext3 -q -F /tmp/test.img  # -F: allow mkfs on a regular file
    mkdir -p /mnt/image
    mount -o loop /tmp/test.img /mnt/image
    
  • You're going to use Samba as the transport; I suggest using tar to create the actual backup. tar will honor the symlinks and store the permissions as well.

    HERE is a good website I use for understanding tar as a backup program
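
    A minimal sketch (assuming the share is mounted at /media/sharename as in the answer above):

    # -c create, -p preserve permissions, -z gzip;
    # --numeric-owner keeps UIDs/GIDs stable for a bare-metal restore
    sudo tar -cpzf /media/sharename/backup-$(date +%F).tar.gz \
        --numeric-owner --one-file-system /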

    From Josh Budde
  • If you make a tarball of your data first, and back up the tarball to the CIFS mount as described in the other answers, you'll preserve symlinks and whatnot.

    From wzzrd

removing duplicate lines from file with grep

I want to remove all lines where the second column is 05408736032.

0009300|05408736032|89|01|001|0|0|0|1|NNNNNNYNNNNNNNNN|asdf|
0009367|05408736032|89|01|001|0|0|0|1|NNNNNNYNNNNNNNNN|adff|
  • Is it that you want to remove all lines where the second |-separated field contains '05408736032'? Will all the lines be formatted the same? If so, this should output the file minus those lines (it's Perl; it takes the original file as the first argument and the output file as the second).

    #!/usr/bin/perl
    use warnings;
    use strict;
    my ($file1, $file2) = @ARGV;
    open my $origin_file, '<', $file1 or die $!;
    open my $newfile, '>', $file2 or die $!;
    while (my $line = <$origin_file>) {
        my @values = split /\|/, $line;
        print $newfile $line unless $values[1] eq '05408736032';
    }
    close $newfile or die $!;
    close $origin_file or die $!;
    

    (I haven't tested this, so you probably want to back up the original file before you try it)

    On reading again, you may be looking to grab only lines with a unique second column. This should do that.

    #!/usr/bin/perl
    use warnings;
    use strict;
    my ($file1, $file2) = @ARGV;
    my %unique;
    open my $origin_file, '<', $file1 or die $!;
    open my $newfile, '>', $file2 or die $!;
    while (my $line = <$origin_file>) {
        my @values = split /\|/, $line;
        print $newfile $line unless defined $unique{$values[1]};
        $unique{$values[1]} += 1;
    }
    close $newfile or die $!;
    close $origin_file or die $!;
    
    From Cian
  • You can do something like:

    for f in `cat $file`; do
      val=`echo $f | cut -d\| -f 2`
      if [ `grep -c "$val" $file` -lt 2 ]; then
         echo $f
      fi
    done
    

    but, like most shell scripts, it's pretty inefficient. You'd be better off doing it in perl, something like:

    @infile = <>;

    foreach (@infile) {
      @foo = split(/\|/);
      $found{$foo[1]}++;              # count occurrences of the second field
    }

    foreach (@infile) {
      @foo = split(/\|/);
      if ($found{$foo[1]} < 2) {      # print only lines whose second field is unique
        print $_;
      }
    }
    
    From pjz
  • awk -F \| '{if ($2 != 05408736032) print}'
    
    Dennis Williamson : You can leave out the "if" and the "print": `awk -F \| '$2 != "05408736032"'`
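
    Since the question title mentions grep, a roughly equivalent filter (assuming the value needs no regex escaping) is:

    grep -v '^[^|]*|05408736032|' file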
    From SergeyZh
  • This might do what you want:

    sort -t '|' -k 2,2 -u  foo.dat
    

    However this sorts the input according to your field, which you may not want. If you really only want to remove duplicates, your best option is Perl:

    perl -ne '$a=(split "\\|")[1]; next if $h{$a}++; print;' foo.dat
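
    An awk one-liner does the same thing, keeping the first line seen for each second field:

    awk -F '|' '!seen[$2]++' foo.dat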
    
    From wallenborn
  • Pure Bash:

    oldIFS=$IFS
    while read -r line
    do
        IFS=$'|'
        testline=($line)  # make an array split according to $IFS
        IFS=$oldIFS       # put it back as soon as you can or you'll be sooOOoorry
        if [[ ${testline[1]} != "05408736032" ]]
        then
            echo "$line"
        fi
    done < datafile
    

Anyone running Cisco CMS plugin 1.2 on a modern system?

I have a Cisco 3750 that I am trying to use the web interface for. It wants me to download and install a CMS plugin version 1.2 that appears to have been built in 2004. The install of the plugin works, but when I try to run the plugin, I get:

It fails on startup with the message "CMS getParametersFromFileError" with text "AppletParameters property not found" and an OK button. Pressing OK causes the CMS to exit.

The details: Cisco CMS v1.2 plugin, IE 8, Windows XP/Pro, Java 6 update 16.

Any hints would be appreciated.

Update: I get the same error when trying to run it in Firefox on the same system, so I'm thinking it is a Java issue.

  • Generally, with Cisco and their propensity for using Java, try back-revving your installed version of Java to version 4. If version 4 doesn't work, try version 5. Ugh.

    Farseeker : You should see their internal app called C3. I was (un)lucky enough to be there at its launch; even though it was better than their old app, it's still an absolute horror that likely deserves to be on thedailywtf.com
    Farseeker : (It's a monolithic java app, seeing as how I forgot to say how that was relevant)
    David Mackintosh : Nasty java versioning FTW.
    From GregD

How do I enable mutual SSL in IIS7 with a self-signed certificate?

I've created a self-signed certificate in IIS7. Then I exported this certificate to a .pfx and then installed it on the client machine's IE browser. Then I set "Require Client Certificate" on the server's IIS configuration. When I try to visit the site with IE, a dialog box comes up for me to choose a certificate, however, there are no certs in that dialog box. When I click "OK" without choosing any certs, I get a 403 forbidden error. How can I make this work? Appreciate the help in advance.

  • Using Microsoft Management Console, add the certificate manager snap-in and choose CurrentUser. Then import the certificate into the CurrentUser->Personal store.

    Restart IE and you should now see the certificate in the list

    From Wayne
  • Chances are the certificate only contains the Server Authentication Extended Key Usage (EKU) and not the Client Authentication EKU...
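
    One way to check (a sketch; client.pfx is whatever you exported) is to dump the certificate and confirm the Enhanced Key Usage field includes Client Authentication (1.3.6.1.5.5.7.3.2):

    certutil -dump client.pfx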

    From Dscoduc

A route misbehaving, blocking only *DNS* to other networks

I have a PC with two NICs, one is connected to a LAN (eth0, static IP 192.168.0.254), another to a DSL modem in DMZ mode (eth1, receives public IP from modem).

Yesterday, it suddenly stopped working for accessing the Internet.

I've narrowed down the problem to this (or maybe this is just a side-effect, I'm not sure):

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     *               255.255.0.0     U     1      0        0 eth0
link-local      *               255.255.0.0     U     1000   0        0 eth0
default         <public_ip>     0.0.0.0         UG    0      0        0 eth1

Edit: By public_ip I don't mean the actual public IP assigned to this machine, but another public IP, which I guess is the one assigned to the modem.

With the default routes as above, I can ping IP addresses but I cannot resolve domains, so it seems DNS is blocked somehow, or maybe it's trying to use the DNS server from eth0.

If I delete the 192.168.0.0 route, then `route` shows the FQDN instead of the public_ip. And then I can resolve domains and access the Internet just fine.

If I assign another computer as the DMZ node in the modem, it works just fine, so it has to be something with this PC. I even tried another NIC for eth0, but no dice.

Any ideas?

  • What's in your /etc/resolv.conf? Could it be trying to resolve using something in 192.168.0.0/16?

    Ivan : nameserver 127.0.0.1
    From Craig
  • I'm not sure why this problem arose; one moment it was working fine, the next...

    Anyway, the problem seems to be that the modem had an IP address like 192.168.1.254/255.255.255.0 (our internal network is 192.168.0.0/255.255.0.0), and it seems the machine was trying to find 192.168.1.254 inside our LAN (whyyyy!?!?).

    I didn't notice it before because of the public IP assigned to the machine (i.e., not 192.168.1.x).

    So I changed the IP used by the modem, and it now works.

    I'm still wondering what the hell happened here. The best explanation I can find is that our ISP updated the modem's firmware without our knowledge, and this somehow interfered with how it was working before.

    From Ivan
  • Based on your additional answer:

    This was happening because the 192.168.1.254/24 address used by your modem is also within the 192.168.0.0/16 address range used on your internal network.

    As you had no specific routing table entry for 192.168.1.0/24, your PC used the best route it had: the one to 192.168.0.0/16, via the LAN interface.
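
    An explicit route for the modem's subnet would also have worked around it:

    ip route add 192.168.1.0/24 dev eth1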

    From Alnitak

ssh failed logins /var/log/btmp

On a VPS where I host some blogs, the /var/log/btmp file is fairly old but is at 6.2 GB.

I assume this means a lot of failed login attempts? Is this common over the course of a year? Bots trying to get server access?

  • I'm not familiar with the system which keeps its ssh login info in that file (mine is in /var/log/authlog), but yes, automated attempts to log into ssh are a common part of what I consider the "background noise" of the internet. Often changing the port ssh listens on can cut this log clutter down considerably, though it's important not to confuse that with making your server more secure against a purposeful entry attempt.

    Josh Brower : +1 for cutting down on background noise, and pointing out that it is not Security through Obscurity
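
    To see what has accumulated there, lastb reads /var/log/btmp directly (run it as root):

    lastb | head        # most recent failed logins
    lastb | wc -l       # rough count of recorded failed attempts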
  • If you would like to rotate that log every month, you can try adding the below to /etc/logrotate.conf:

    /var/log/btmp {
        monthly
        minsize 1M
        create 0600 root utmp
        rotate 1
    }
    
    From Josh Budde

How do you synchronise huge sparse files (VM disk images) between machines?

Is there a command, such as rsync, which can synchronise huge, sparse, files from one linux server to another?

It is very important that the destination file remains sparse. It may be longer (but not bigger) than the drive which contains it. Only changed blocks should be sent across the wire.

I have tried rsync, but got no joy. groups.google.com/group/mailing.unix.rsync/browse_thread/thread/94f39271980513d3

If I write a programme to do this, am I just reinventing the wheel? http://www.finalcog.com/synchronise-block-devices

Thanks,

Chris.

  • I'm not aware of such a utility, only of the system calls that can handle it, so if you write such a utility, it might be rather helpful.

    What you actually can do is use qemu-img convert to copy the files, but it will only work if the destination FS supports sparse files.

    From dyasny
  • Rsync only transfers changes to each file and with --inplace should only rewrite the blocks that changed without recreating the file. From their features page:

    rsync is a file transfer program for Unix systems. rsync uses the "rsync algorithm" which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files across the link, without requiring that both sets of files are present at one of the ends of the link beforehand.

    Using --inplace should work for you. This will show you progress, compress the transfer (at the default compression level), transfer the contents of the local storage directory recursively (that first trailing slash matters), make the changes to the files in place and use ssh for the transport.

    rsync -v -z -r --inplace --progress -e ssh /path/to/local/storage/ \
    user@remote.machine:/path/to/remote/storage/
    

    I often use the -a flag as well, which does a few more things. It's equivalent to -rlptgoD; I'll leave the exact behavior for you to look up in the man page.

    : The '-S' is for sparse files, not 'chops long lines'. From man page: -S, --sparse handle sparse files efficiently. I'll give this a try, thanks.
    wizard : Thanks, I fixed that; I was going off of something that was said in the link you gave.
    : No, unfortunately this does not solve the problem. It *does* sync the file, but it turns the sparse file at the far end into a non-sparse file. I am using ssh/rsync which comes with Ubuntu 9.04.
    : My above comment was incorrect. The problem was that rsync creates non-sparse files on its first copy. The --inplace rsync does work correctly, provided that the destination file already exists and is as long (not big) as the origin file. I now have a solution, but it requires me to check whether each file already exists on the target server. If it does, I do an --inplace, if it doesn't, I use --sparse. This is not ideal, but it works.
    : Solved - http://www.finalcog.com/rsync-vm-sparse-inplace-kvm-vmware
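
    A sketch of the exists-check logic described in that last comment (hostnames and paths hypothetical):

    for img in /var/lib/vms/*.img; do
      if ssh user@remote test -e "/backup/${img##*/}"; then
        rsync -v --inplace "$img" user@remote:/backup/   # exists: update blocks in place
      else
        rsync -v --sparse "$img" user@remote:/backup/    # first copy: create it sparse
      fi
    done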
    From wizard
  • Take a look at the Zumastor Linux Storage Project; it implements "snapshot" backup using binary "rsync" via the ddsnap tool.

    From the man-page:

    ddsnap provides block device replication given a block level snapshot facility capable of holding multiple simultaneous snapshots efficiently. ddsnap can generate a list of snapshot chunks that differ between two snapshots, then send that difference over the wire. On a downstream server, write the updated data to a snapshotted block device.

    From rkthkr
  • Could replicating the whole file system be a solution? DRBD? http://www.drbd.org/

    : I don't think drbd is a good solution here, but the idea of rsyncing --inplace the whole fs, rather than the disk-image-files, is interesting. I'm not sure whether rsync allows this - I'll give it a try and report back...
    From James C

How to replace default debian 5 flash installation?

How do I replace the default debian 5 flash installation with the adobe version?

I enabled contrib and non-free repositories in the sources.list file. And installed flashplugin-nonfree.

Logged out and back in, firefox still uses the default one.

I tried removing swfdec-* but that asks for gnome to be removed as well.

  • Try running update-flashplugin --install and see what the output is.

    Also, look into /usr/lib/iceweasel/plugins and perhaps you can manually remove something if all else fails.

    What's in about:plugins ?

    From Josh K
  • update-alternatives --config flash-mozilla.so will allow you to select which Flash plugin is used.

    From TRS-80

Trouble getting FTP login to work in IIS6

Hello all,

I'm trying to setup an FTP site for one of my clients to pickup files from us using IIS6. I've created the FTP site, have set to not isolate users (not necessary as FTP will be read only with authentication).

Here's the problem. The FTP is to be password protected, so I turned off anonymous access on the FTP site. I then created a ftpuser account on the machine, and gave it read and browse directory permissions on the FTP's root directory. However, when I go to test the ftpuser login, I get a 530 "ftpuser cannot login" error. However, if I browse to the same directory over HTTP (anonymous access turned off as well) and enter the ftpuser login info, I can download files and browse directories successfully. Why is the ftpuser working over HTTP but not FTP? Shouldn't I be able to login over FTP with the ftpuser login information I just created?

Thanks in advance, - Frank

  • FTP status code 530 is definitely related to a username\password error. Have you tried authenticating to the ftp site as:

    machinename\ftpuser password

    From joeqwerty

copying a file 2 connections away?

I dislike calling scp twice.

I need to connect to box1 via ssh and then again to boxNAME. How do I copy a file from boxNAME to my local drive? Also, how do I copy a file from my local drive back to boxNAME?

Alternatively, if I can grab a file (either one), output it, and use it as stdin on the other side, that would work just as well (the files are <4k and text). Bonus points if you can tell me how to create a text document and use stdout to write and save it (remember, this is across 2 connections, not local, in which case I'd do cat file >> out.txt).

  • Have you tried something like this ?

    ssh user@host1 "ssh user@host2 'cat file.txt' " > local_file.txt
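
    The same trick works in the other direction (local file to boxNAME):

    cat local_file.txt | ssh user@host1 "ssh user@host2 'cat > file.txt'"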
    
    From SergeyZh
  • I think the easiest method might be to simply use port forwarding. Start a session to the intermediate box and forward some port to allow an ssh connection to the far host, then simply use scp via your port forward.

    If you need to do this regularly, you could create a ssh configuration that uses the ProxyCommand directive.

    Host farhost
        ProxyCommand /usr/bin/ssh username@intermediate "/bin/netcat -w 1 farhost 22"
        User username
    

    Given something like the above, you can make connections to the far host as if it were directly connected. For this to work best, you'll want to set up key-based authentication.
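
    With that in ~/.ssh/config, copying in either direction becomes a single command:

    scp file.txt farhost:/tmp/
    scp farhost:/tmp/file.txt .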

    From Zoredache

Files in /var/qmail/alias/.maildir/new/

I run a Gentoo LAMP server with qmail.

I see that the folder /var/qmail/alias/.maildir/new/ contains 100'000s of files.

Do you know how I can have them purged automatically?

Thanks for your great help.

EDIT: I checked the files (actually 400k+). They are from 2004 to today. They all have the same header (toto.com is a placeholder for the real domain):

Return-Path: <#@[]>
Delivered-To: postmaster@mail.toto.com
Received: (qmail 27514 invoked for bounce); 17 Sep 2009 15:46:37 +0200
Date: 17 Sep 2009 15:46:37 +0200
From: MAILER-DAEMON@mail.toto.com
To: postmaster@mail.toto.com
Subject: failure notice

If I use Outlook and check the postmaster account, I do not see them.

  • Each file represents a separate email. If you don't need them, you can delete them; I guess the easiest way is to rm -rf /var/qmail/alias/.maildir/ and then recreate it with maildirmake /var/qmail/alias/.maildir. If I were you, though, I'd open some of them first just to be sure you're not deleting something important, although with 100k of emails it's most likely outdated spam/bounces.
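
    If you'd rather purge old messages automatically than wipe the directory, a cron-able find is a common approach (a sketch; as suggested above, check a few files first):

    find /var/qmail/alias/.maildir/new/ -type f -mtime +30 -delete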

    From alexus
  • After deleting them you probably want to investigate why you have them.

    They look like double-bounces (the "#@[]" in the return-path/envelope sender), so check a) the contents of /var/qmail/control/doublebounceto (which, if not present, defaults to "postmaster") b) the contents of the relevant .qmail file.

    (I'd take a guess that ~alias/.qmail-postmaster contains "./.maildir/" or similar, as well as some other location that you are actually checking when you check what you believe to be the postmaster account.)

    From jrg

execute a script against a table on a different instance

Hi,

Scenario
It's pretty involved to explain, but I have a procedure that updates data in two databases on instance1, and as part of the process (it's a cursor, for relevant reasons) it needs to make updates to a database on instance2 where certain criteria are met. If the line of code that references instance2 is altered to refer to the local instance, the procedure runs in less than a minute. If it is set to refer to the correct location on instance2, the procedure takes 30-40 seconds per record (we have never let it complete).

History (this morning)
Reviewing the estimated execution plan, the code that makes the single-record update on instance2 is using a remote scan on a table that is 100k records deep.

I changed this from

Update C set col1 = @val1, col2 = @val2 where col2 = @ID

to

Execute ('Update C set col1 = @val1, col2 = @val2 where col2 = @ID') as user1 at Instance2

where Instance2 is a linked server and user1 is a SQL login that has impersonation enabled on the linked server. This was so that the update process can make use of the clustered index on col2 and therefore avoid the table scan.

Issue
We are now getting security/authentication errors and the script is failing with

Msg 15274, Level 16, State 1, Procedure "procname", Line 263

Access to the remote server is denied because the current security context is not trusted.

Can anyone advise me what I need to configure to allow this update to execute please? Or, is there a better way that I can get the update to use the index on the table on Instance2? From my knowledge table hints are not allowed on remote queries...?

Many thanks

Jonathan

  • There are two major problems that stood out for me:

    1. Your execute statement is sending the literal strings (e.g. '@val1') within the query to Instance2 because they're all contained within a string. In order to send the values, you would need to change that part of the execute statement to read:

      Execute ('Update C set col1 = ' + @val1 + ', col2 = ' + @val2
               + ' where col2 = ' + @ID) as user1 at Instance2
      

      Note that the code above assumes both @val1 and @val2 are character types holding values which do not need quotes around them in the generated SQL (T-SQL will not implicitly convert numeric variables for string concatenation); otherwise you'd use the following:

      Execute ('Update C set col1 = ''' + @val1 + ''', col2 = ''' + @val2
               + ''' where col2 = ''' + @ID + ''') as user1 at Instance2
      
    2. Your query is not benefiting from any potential preoptimization on Instance2. Therefore, I would suggest creating a stored procedure on Instance2 so that you could benefit from pre-runtime optimization of the query, and also use all the optimization hints that you might want to place in the query. So on Instance2, you could create a procedure like this (note again that I have assumed integer datatypes):

      CREATE PROCEDURE user1.UpdateC (@val1 int, @val2 int, @ID int) AS
      BEGIN
          UPDATE C WITH (ROWLOCK) SET col1 = @val1, col2 = @val2 WHERE col2 = @ID
      END
      

      Then, your local script could replace the corrected code in part 1 above with the following (this again assumes the variables are character types; integer variables would first need a CAST/CONVERT into strings, done in a separate variable, before they can be concatenated):

      EXECUTE ('EXECUTE user1.UpdateC ' + @val1 + ', ' + @val2 + ', ' + @ID)
          AS user1 AT Instance2
      
  • MSDN on linked server security: http://msdn.microsoft.com/en-us/library/ms175537.aspx

    Configuring linked servers for delegation: http://msdn.microsoft.com/en-us/library/ms189580.aspx

    Similar problem to yours (i believe): http://dbaspot.com/forums/ms-sqlserver/173869-access-remote-server-denied-because-current-security-context-not-trusted-sqlstate-42000-error-15274-a-2.html

    another solution on SQL Server Central: http://www.sqlservercentral.com/Forums/Topic476794-149-1.aspx
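
    In many cases the fix boils down to an explicit login mapping on the linked server, along these lines (a sketch; names are hypothetical and the password is elided):

    -- Run on Instance1: map local logins to user1 on the linked server
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = N'Instance2',
        @useself     = N'FALSE',
        @locallogin  = NULL,
        @rmtuser     = N'user1',
        @rmtpassword = N'...';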

    From SQLChicken

Install Apache on RedHat without internet connection

How do I install Apache + MySQL + PHP on a RedHat server without an Internet connection? I know that I can use yum to install them, but because my servers are inside a firewall I can't reach the Internet. Is it possible to use my computer (when I'm connected to the RedHat server via VPN) as a proxy? If yes, then how is it done? Thanks in advance.

  • I'd mount the DVD that you used to install RedHat, create a yum .repo file, and install off of that disk; all the files you need are already on it. Worst case scenario, just rpm -ivh the files from that directory and you should be able to install even without yum.

    brad.lane : or download the .rpm files to a usb stick, then use good ol' sneaker-net
    From alexus
  • Do you at least have your RHEL DVD mounted on the server? Otherwise you can copy all the RPMs from the DVD to the server and run createrepo on them; you want a repository because of the dependencies. Then you can use yum install apache mysql php, or use rpm -ivh with the package names.

    From Rajat
  • You have basically 2 options

    • Download the needed RPMs, copy them to the server and install them manually using rpm -Uvh xxxx.rpm.
    • Set up a local yum repository from a mounted DVD with all RPMs for that distro

    The first one is the easiest, and was the way you installed packages before all this yum/repos magic got working (RH 7.3 anyone? :-)). This method will fail if some other RPM is needed (a dependency), in which case you will also have to download that RPM and include it on the command line.

    The second one is better. It involves mounting the DVD, using "createrepo" to create the repository files, and setting up a repo in /etc/yum.repos.d using "baseurl=file://path/to/repo.data.."

    Here is an example.
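
    And a minimal sketch of the same idea (assuming RHEL 5 media, which already carries repodata under its Server directory):

    mount /dev/cdrom /mnt/dvd

    # /etc/yum.repos.d/dvd.repo
    [dvd]
    name=RHEL DVD
    baseurl=file:///mnt/dvd/Server
    enabled=1
    gpgcheck=0

    # then:
    yum install httpd mysql-server php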

Configuring htaccess to show authentication prompt only for subdomain

How do I write the htaccess so that it will only require authentication when on admin.example.com, but not on www.example.com (like by using some if-else clause)?

Background: I have a site running in two modes: The admin mode should be reached at something like admin.example.com, whereas the normal mode would be www.example.com -- but both should point to the same directory & scripts within them (the scripts then turn on certain editing features by checking if the script is accessed from the admin subdomain).

Edit: I can now see this has been asked and answered at StackOverflow, though I can't get the top answer to work for me...

  • Check:

    • that all the needed modules are installed
    • that all the directives can be used in .htaccess files
    • that the needed AllowOverride settings are in httpd.conf to allow the use of .htaccess files
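
    If those prerequisites are met, a sketch of the usual mod_setenvif approach (Apache 2.2 syntax, untested; adjust names and paths):

    # flag requests that arrive via the admin hostname
    SetEnvIfNoCase Host ^admin\.example\.com$ require_auth

    AuthType Basic
    AuthName "Admin area"
    AuthUserFile /path/to/.htpasswd
    Require valid-user

    # host-based access passes for everyone except flagged requests;
    # "Satisfy Any" then forces those to authenticate instead
    Order Allow,Deny
    Allow from all
    Deny from env=require_auth
    Satisfy Any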
    From Craig

IIS MODULE_SET_RESPONSE_ERROR_STATUS

Hello, I'm getting a 404 error on my web application. I also tried Doloto: http://research.microsoft.com/en-us/projects/doloto/ , but its reports tell me it is a compression issue. I don't think so, because it is just one page with some AjaxControlToolkit features.

This is the stack:

ModuleName IIS Web Core 
Notification 16 
HttpStatus 404 
HttpReason Not Found 
HttpSubStatus 0 
ErrorCode 2147942402 
ConfigExceptionInfo  
Notification MAP_REQUEST_HANDLER 
ErrorCode The system cannot find the file specified. (0x80070002)

You can check it on : http://209.45.92.194:82/

  • There's something wrong in your web.config, a path to a request handler is not right. Verify all of the path information is correct in the <httpHandlers> section of your web.config. Here's an MSDN article that discusses registering HTTP handlers.

    From squillman

Cpanel and add-on domains

I have a cpanel on my hosting and I have created a directory holding my new website.

Is it possible to set the main domain (where the webpages are normally stored in the root) to a subfolder?

I have done this with addon domains, but I'm not sure how to do it with the main domain name.

Thanks for the help. Ian

DHCP / PXE Diagnosing - Is there a wget/wfetch for DHCP?

Hi

I am having a few problems with a DHCP / PXE environment, I think I know what the problem is but I really just want to see what the server is getting back.

I love tools like Wget and Wfetch for seeing exactly what is being sent back and was just wondering if there are any similar tools for DHCP that just allow me to see what is being sent back.

FYI - This is in a Data centre, I have no physical access. I rent about 7 servers and have no physical access to the switch. The servers also rely on having a fixed IP setting, I have not had much luck trying the usual techniques.

I am currently in Windows PE, but have also tried from within 2008.

When I say usual techniques: I have tried using Wireshark and Microsoft Network Monitor, but I have not had any luck, because a machine with a static IP never issues a DHCP request.

I can have access to the BIOS and basically want to boot another server off the network. I can get into PXE and boot Windows PE or several other (modified) ISOs, but I am having problems with a new ISO I want to test; I think it is getting incorrect parameters, so I just want to see the raw DHCP request.

  • You don't mention the OS in question, what problem you're seeing, whether you're the DHCP / PXE server or client (or both), and what "usual techniques" you've tried so far. Any or all of those details would be helpful.

    Having said that, I'd look to whatever sniffer software your OS has available as the first stop along the way. Presumably one or more of your computers is having the problem, so sniff there, since you can't access the switch and perform any kind of port mirroring, etc.

    When I'm trying to diagnose a problem "on the wire", the first tool I reach for is a sniffer because I want to see what's happening on the wire.

    Give us some details and we'll see what we can do.

    Wil : Sorry, editing the original question
    Evan Anderson : No need to be sorry... >smile< Just want to give you the best information possible.
  • You could set up a virtual machine on one of your servers and enable DHCP on the VM. Then you can test as much as you like with a sniffer running on the VM without having to make any potentially damaging network configuration changes to any of your servers (especially since the servers are rented and they most likely have vendor-specific management utilities installed and possibly some custom configuration by your hosting provider to optimize performance within that environment).
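
    Whichever sniffer you use, limiting the capture to DHCP traffic keeps the exchange readable; the filter below works for tcpdump and, as a capture filter, for Wireshark:

    tcpdump -n -v -i eth0 'udp port 67 or udp port 68'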

Connect an iPhone to OpenVPN

Is there any way to connect an iPhone to our company OpenVPN server?

  • As of the latest iPhone OS release (3.0) this is not possible. I am unsure if someone could create an 'all in one' client that connects to OpenVPN and then allows you to browse the web from within the same app.

    But certainly any app that connected to OpenVPN could not then be put into the background allowing you to use the built in Mobile Safari or Mobile Mail over the VPN.

  • I've looked into this and unfortunately it's not possible, and it's unlikely to happen any time soon. Whilst it's relatively easy to write the software for the iPhone, the problem is that it needs a tap/tun interface to create the OpenVPN tunnel, which is not present in the iPhone currently.

    From Sam Cogan
  • There seems to be a solution for jailbroken iPhones here : http://code.gerade.org/tunemu/

    I haven't tested it though.

    Marc : Thanks a lot for the hint! Maybe something could be made by developing an app which includes this code and an integrated web view. So we could at least browse the web servers on the VPN. Of course it would be way better to have this system-wide.
  • I have it working in a jailbroken iphone 3.x. Detailed instructions can be found here: http://chandraonline.net/blog/?p=22

    Marc : @That's great, thanks. Now if only it worked on non-jailbroken iPhones...
    From cs123
  • There is an easy solution here: www.guizmovpn.com

    Marc : (+1) This one needs a jailbreak, so it won't be a solution for most people including me.
    From Guizmo

DNS CNAME Record windows 2003 R2 server by IP address

I am trying to add a CNAME record to point to an IP address. I have added the CNAME record to the primary DNS server, and it looks like this:

foobar Alias(CNAME) 192.168.50.11

I restarted the DNS server service. When I ping the new name I get the following:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ping foobar

Pinging 192.168.50.11 [67.215.75.132] with 32 bytes of data:

Request timed out.

Request timed out.

Request timed out.

Request timed out.

Ping statistics for 67.215.75.132: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I have no idea where it is getting the 67.215.75.132 address.

nslookup returns:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

nslookup foobar

Server: 1server

Address: 192.168.10.10

Name: 192.168.50.11

Address: 67.215.75.132

Aliases: foobar

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1server is the machine with the DNS server service.

  • You should set a CNAME to point to the host name defined in the ADDRESS (A) record of the target host instead of the IP. So whatever hostname 192.168.50.11 is is what goes in the record, not the IP.

    foobar Alias(CNAME) targetmachine
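
    In zone-file terms (a sketch; targetmachine is whatever name holds the host's A record):

    targetmachine   IN  A      192.168.50.11
    foobar          IN  CNAME  targetmachine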

    Tony : I do not have a A record for the machine. It is not part of the Domain.
    squillman : It doesn't have to be part of your domain. It just has to be resolvable. In that case, though, you'd put the fully-qualified domain name in the CNAME record.
    Tony : I added a A record for the machine with the IP address 192.168.50.11. I then pointed the CNAME to the new host record and that works.
    squillman : Good deal :)
    From squillman

Hardware H.264 video encoders for production use?

I'm involved in a project that will involve encoding H.264 video from several sources for live, real-time transmission over the 'net, and it'd be nice to avoid having to dedicate an entire CPU-heavy server for every 1 or 2 sources.

Some searching for hardware H.264 encoders turned up cheap USB gadgets with their own custom software targeted at home use. Undoubtedly useful devices, but unfortunately this is a commercial application that needs reliability and the ability to "play well with others" (e.g. be integrated into an existing software stack without too many changes to said stack).

So... What kind hardware options are there for real-time H.264 encoding in a professional environment?

  • There are a few of those gadgets in the pro-sumer range, but as you suspect you'll need something more commercial. You're likely going to end up looking at a 4 or 8 channel PCI-E card.

    The big question though is resolution - are you encoding an HD source, or a standard definition NTSC/PAL source? For the latter there are plenty of solutions on the market, most targeted at surveillance systems and with a healthy price range (sub $1000 and even sub $500). For real time HD the technology does exist (there are several one-chip solutions that can do full 1080p encoding with the lag time measured at just a few dozen lines), but it will usually cost you more.

    Your concerns about "playing well with others" will also continue even with commercial solutions. Remember that many of these systems are made to work out of the box, and there isn't a really good PCI/USB standard for sending pre-encoded video.

    An alternative where there is already a good standard in place is IP cameras - specifically, look for an "analogue camera to IP camera" conversion box (also called a video server). Basically it's a box about the size of a cable modem (or larger for models supporting a lot of channels) that converts the input from a non-IP camera (or other video source) into H.264 and sends it off over the net just like an IP camera.

    As an example here is a manufacturer (note that I just Google'd them up, so I can't give any recommendation for their particular products)

    From David
  • Depending on your requirements you might look into using Adtec encoders. I've used them and had no problems, but I haven't tried any other so I don't know how they compare.

    From Amuck

Router Linksys RV042 WAN2 problem

I've got a brand new Linksys RV042, firmware 1.3.12.6-tm, completely factory configuration, and the WAN2 port doesn't appear to work - it never gets an IP address.

I've got a DSL and Cable connection, either work plugged in to the WAN1 port, neither work plugged in to WAN2. It shows the link light and activity, but never gets an IP address and never tries to send traffic over the link.

I've tried a bunch of things, including switching from failover to load balancing, setting it as the default port, setting a static IP address, changing the WAN2 mac address, but nothing works.

I've seen a few other mentions of this - is the router just a dud or is there something I'm missing?

Routing select hosts through vpn

I have a PPTP VPN connection set up on an Ubuntu 8.10 box as ppp0, and I was wondering how to route select connections through the VPN.

For example I want google.com to go through the default interface, but bing.com to route through ppp0.

Could I do this with a routing rule? Or is something like this more cut out for iptables?

  • You can do this using a routing rule, but you will have to add all of Bing's IP addresses to the route through ppp0. In the real world this is much harder: Google, for example, has a lot of IPs and keeps adding more, so you would have to update your rule to reflect the addresses currently in use.

    abronte : The bing and google are just examples, the hosts i will be using only have 1 ip. What would adding a route rule like this look like?
    abronte : this can be done by doing "route add -host gw "
    Wienczny : You could also use "ip route add $IP dev $DEVICE"
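
    Filled in with a concrete (hypothetical) address, that looks like:

    ip route add 203.0.113.10/32 dev ppp0
    # or, with the older syntax:
    route add -host 203.0.113.10 dev ppp0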
    From Wienczny
  • Apologies if this is considered a thread hijack, but I think in theory it answers Abronte's question; we just need the answer to do it in a practical sense.

    Ok, I have the same problem. I have a VPN which I want to route traffic for specific websites through. I know how to do it, I just don't know which utilities to use and how to configure them.

    The proposed solution:

    Firefox -> Foxyproxy (filtering on regex) -> HTTP proxy -> VPN interface.

    I have Foxyproxy set up to use the HTTP proxy when it matches a pattern. The bit where I am stuck is getting an HTTP proxy that will send requests out on a specific interface. I have tried 'tinyproxy', but it does not seem to take notice of 'bind 192.168.100.170', which is the IP address of my ppp0 VPN interface.

    Can someone suggest an HTTP proxy that will allow this?

    From JRT

Exchange webmail configuration

I am running Outlook Web Access and I need to enforce HTTPS. However, I cannot use forms-based authentication. I have to use HTTP authentication. When a user tries to access to the site via HTTP I need to redirect them to HTTPS and present the http authentication dialog. How can I accomplish this?

  • Figured it out already. Create an asp page that redirects to https, and then set the IIS 403.4 error redirect to this file.
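
    A minimal example of such a page (a sketch; classic ASP):

    <%
    ' Send the browser to the HTTPS equivalent of whatever was requested
    Response.Redirect "https://" & Request.ServerVariables("HTTP_HOST") _
        & Request.ServerVariables("URL")
    %>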

    From Shawn

Resizing Xen guests using LVM

I have a RHEL 5.4 server running as a Xen Dom0, and wish to install several RHEL 5.4 DomU guests using LVM as the guest disks. I have created the following two LVs:

xen-test02-root  VM-VG -wi-a-   6.00G
xen-test02-swap  VM-VG -wi-a- 512.00M

I used the custom partitioning option when installing the guest, so no LVM is used in the guest, only 2 disks: one for / (xvda) and one for swap (xvdb).

This all works fine, but now I wish to test extending the root partition. So far, I have tried using lvextend from the Dom0. This works:

# lvextend -L +4GB /dev/VM-VG/xen-test02-root
  Extending logical volume xen-test02-root to 10.00 GB
  Logical volume xen-test02-root successfully resized

fdisk shows that the disk is now 10.7GB:

# fdisk -l /dev/VM-VG/xen-test02-root

Disk /dev/VM-VG/xen-test02-root: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                     Device Boot      Start         End      Blocks   Id  System
/dev/VM-VG/xen-test02-root1   *           1         783     6289416   83  Linux

I now wish to extend the partition on that disk with parted:

(parted) print

Model: Linux device-mapper (dm)
Disk /dev/mapper/VM--VG-xen--test02--root: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  6440MB  6440MB  primary  ext3         boot

(parted) resize 1 32.3kB 10.7GB
Error: File system has an incompatible feature enabled.
(parted)

Any clues as to what I'm doing wrong? Is parted the best tool to resize partitions? Should I be using LVM differently for Xen guests?

Many thanks, z0mbix

  • Why are you partitioning the LV, instead of just using it directly? Also, if you are going to be manipulating the partition table, it's best to do it in the guest. Worse, it looks like you might be trying to fiddle with the partition table in the dom0 while the domU is still running... dangerous.

    My simple recipe for resizing a domU disk, which I've done probably in excess of a hundred times by now, is to have the domUs with the LV as the full root partition (xvda1) and then running:

    lvextend -L+NG vg/domu-root
    xm shutdown -w domu
    xm create domu
    ssh domu resize2fs /dev/xvda1
    

    And voila, all done. For non-root filesystems, you can just detach/reattach (useful for swap, in particular), but root needs the reboot.

    z0mbix : Thanks, I would like to use the LV directly, but the RHEL installer requires that I partition /dev/xvda to create /dev/xvda1 for the / partition. Is there a way around this? I'm not editing the partition table when the domU is running.
    womble : I avoid RHEL like the plague, so I'd just use xen-tools to create the domU and avoid the RHEL installer altogether.
    From womble
  • Your problem here is that you can't resize an ext3 partition with parted; you have to remove the journal (turning ext3 into ext2) and then resize.

    see this for more info

    http://www.hermann-uwe.de/blog/resizing-ext3-partitions-with-parted
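
    The journal removal itself is done with tune2fs against the unmounted filesystem (a sketch; the partition device name is hypothetical, and from the dom0 you would first need to expose it, e.g. with kpartx):

    tune2fs -O ^has_journal /dev/mapper/VM--VG-xen--test02--rootp1
    # ... resize with parted, then put the journal back:
    tune2fs -j /dev/mapper/VM--VG-xen--test02--rootp1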

  • In your XEN config, don't attach the LV to xvda, attach it to something like xvda1 etc. The xvda device in your domU won't exist, but your domU will still see /dev/xvda1 as a valid partition.

  • # lvextend -L +50GB /dev/VolGroup01/fileserver.home 
      Extending logical volume fileserver.home to 300.00 GB
      Logical volume fileserver.home successfully resized
    
    # e2fsck -f /dev/VolGroup01/fileserver.home
    e2fsck 1.39 (29-May-2006)
    /dev/VolGroup01/fileserver.home: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    
    
    # resize2fs /dev/VolGroup01/fileserver.home 300G
    resize2fs 1.39 (29-May-2006)
    Resizing the filesystem on /dev/VolGroup01/fileserver.home to 78643200 (4k) blocks.
    The filesystem on /dev/VolGroup01/fileserver.home is now 78643200 blocks long.
    

    done!

    Chris S : Welcome to Server Fault. Please don't post "me too" or "thank you" type message on Server Fault. This site is for Questions and the associated Answers. If you have any new questions please use the Ask Question button in the upper right corner of every page.
    From Paul C