Tuesday, January 18, 2011

Event log notification by email?

Is there a way to have Windows Server e-mail me a message any time a user logs on to the server?

  • Sure, you could put a command into /etc/profile. Something like:

    echo "$USER logged in" | mail -s "Login Notification" user@example.com
    

    I just tested this on my Debian server and it worked just fine.

    ErikA : Hah - I missed the "windows" part. Nevermind. I'll leave my "answer" up, though, in case it might be helpful for someone.
    zsharp : thanks for the tease
    Nick Kavadias : @zsharp install cygwin if you wish to use this solution!
    From ErikA
  • On Windows Server 2008, create a task using the Task Scheduler. Choose Create Task, and on the Triggers tab choose to begin the task at log on for any user. Under Actions, choose "Send an e-mail". You might also want to set the setting under the Settings tab to run a new instance in parallel if the task is already running, to catch multiple logons.

    E-mail is sent using NTLM authentication for Windows SMTP servers, which means that the security credentials used for running the task must also have privileges on the SMTP server to send e-mail. If the SMTP server is a non-Windows server, the e-mail will only be sent if the server allows anonymous access. If it's a non-Windows e-mail relay, you can instead kick off a script that sends the e-mail, in which you can specify the credentials used to relay (a PowerShell sketch follows).
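
    If you go the script route, a minimal PowerShell sketch might look like this; the relay host, account, and addresses are placeholders, not anything from your environment, and for an unattended task you'd want to store the password more securely than hard-coding it:

    # hypothetical relay credentials; substitute your own and protect the password appropriately
    $pass = ConvertTo-SecureString "relay-password" -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential ("relayuser", $pass)
    Send-MailMessage -SmtpServer "relay.example.com" -Credential $cred `
        -From "alerts@example.com" -To "admin@example.com" `
        -Subject "Logon on $env:COMPUTERNAME" -Body "$env:USERNAME logged on at $(Get-Date)"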

    zsharp : how to specify user pass and user name for the remote smtp?
    From Jim B

Dynamically add Server 2008 NLB Nodes

Hi All,

I have a small NLB cluster for Terminal Servers. One of the things we're looking at doing for this particular project (this is for a college class) is dynamically creating Terminal Servers.

What we've done is create policies for a certain OU that set the proper TS farm properties and install the Terminal Server role and NLB feature. Now what we'd like to do is create a script to be run on our Domain Controller to add hosts to the preexisting NLB cluster. On our Server 2008 R2 Domain Controller, I was thinking of running the following PowerShell script I've kind of hacked together.

Any thoughts on if this will work? Is there any way I can trigger this script to run on the DC once all the scripts to install roles are done on the various Terminal Servers?

Thanks very much in advance!!

Import-Module NetworkLoadBalancingClusters

$TermServs = @()
$Interface = "Local Area Connection"

# Enumerate the computer objects in the Terminal Server OU
$ou = [ADSI]"LDAP://OU=Term Servs,DC=example,DC=com"
foreach ($child in $ou.psbase.Children)
{
  if ($child.ObjectCategory -like '*computer*') {$TermServs += $child.Name}
}

# Add each terminal server found above to the existing NLB cluster
foreach ($TS in $TermServs)
{
  Get-NlbCluster 172.16.0.254 | Add-NlbClusterNode -NewNodeName $TS -NewNodeInterface $Interface
}
  • What OS are you running on the TS servers? Are you adding these TS servers at any kind of regular interval? If you are, then the script you want to run on the DC could be scheduled to run every X minutes / Y hours; a sketch follows. I don't have an environment to test this.
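
    For example, a scheduled task on the DC could re-run the script periodically; something along these lines, where the script path and interval are placeholders:

    schtasks /Create /TN "Add NLB Nodes" /SC MINUTE /MO 30 /RU SYSTEM ^
        /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Add-NlbNodes.ps1"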

    From Marco Shaw

redirect to domain without www

I use IIS 7. How do I configure it so that if the user types www.serverfault.com it redirects to serverfault.com, like on this site?

  • Huh? Do you mean that if the user types mydomain.com or www.mydomain.com it goes to the same web site? If so, then you need to do two things:

    1. Make sure your domain A record points to the IP address of the web server, and make sure your www record points to the IP address of the web server (obviously).

    2. Add host headers to the web site in IIS 7 for both URLs: you'll want to add mydomain.com and www.mydomain.com as host headers on the web site.

    From joeqwerty
  • I believe you want to edit your zone's DNS records rather than mess with IIS. Add a CNAME record named www pointing to yourdomain.com, so www.yourdomain.com resolves to yourdomain.com without the www.

    James Deville : as Scott mentions, this will give you 2 sites with the same content with 2 urls. It also won't redirect. www.foo.com will go to www.foo.com and foo.com will go to foo.com. OP wanted www.foo.com to redirect the browser to foo.com.
    Dscoduc : This doesn't force the user to choose www.domain.com or domain.com... Unless you are using a reverse proxy solution , like ISA, IIS is exactly the place where you would make this change.
  • There are two good ways to do this in IIS 7. URL Rewrite is great if you have it installed; with it you can create a rule to redirect www to non-www (a sample rule is sketched after the steps below). Another option is the HTTP Redirect feature in IIS.

    To use the IIS HTTP Redirect (easiest method), do the following:

    • create a 2nd site with a host header binding of www.yourdomain.com
    • BE SURE to point it to a different path on disk, since updating the HTTP Redirect in IIS Manager updates the web.config file, which you don't want to touch for your main site. Just point it to a dead-end folder, since it isn't used for anything other than reading the web.config file.
    • For your www site, turn on HTTP Redirect and set the value to http://yourdomain.com. Set the status code to 301 (permanent).
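
    If you go the URL Rewrite route instead, a rule along these lines would do it; the host names are placeholders and this assumes the URL Rewrite module is installed:

    <system.webServer>
      <rewrite>
        <rules>
          <!-- 301 any request for www.yourdomain.com to yourdomain.com, keeping the path -->
          <rule name="Redirect www to non-www" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^www\.yourdomain\.com$" />
            </conditions>
            <action type="Redirect" url="http://yourdomain.com/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
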
    joeqwerty : @Scott: Thanks for the insight. i thought host headers and the appropriate DNS records were all that was required. Thanks for clearing it up for me.
    Scott Forsyth - MVP : @joeqwerty. Often times it is all that is needed. People don't usually mind having 2 domains (with and without www) pointing to the same site. It's getting more common now for people to ensure that only 1 domain name points to their site for search engine optimization (SEO) reasons.
    joeqwerty : @Scott: well then i learned something new today... ;)

Opera unite as a daemon

Is it possible to run an opera unite server without the browser?

Say for instance I want to use my always-on FreeBSD server, which doesn't run X.

  • It actually needs a running X server to draw its widgets: unless the application has an option to run as a daemon or something, there's no straightforward way.

    But let's think: if we can redirect its display socket via SSH, maybe we can accept all X requests and just do nothing? YES! Here's the recipe:

    First, you'll need to ssh -X user@server opera from a remote machine to set Opera up via the GUI and have it running. Done? OK, close it then.

    Now you can fool it into thinking X is running: Xvfb - a "fake X server" - is made for exactly this kind of thing: it emulates a dumb framebuffer using virtual memory. There's a script, xvfb-run, in the xorg-server package that makes it easy (sketch below). Note that you still have to install the X server to get Xvfb (unless there's a separate port out there).
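
    With Opera configured, something along these lines should keep it running against the virtual framebuffer (assuming xvfb-run is available on your system; the screen geometry is arbitrary):

    # no real X display needed; Xvfb renders into memory
    xvfb-run --auto-servernum --server-args="-screen 0 1024x768x24" opera &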

    Cheers!

    From o_O Tync

Why do I get HTTP 500 errors trying to serve PHP content out of an IIS 6 virtual directory?

I know PHP is working, because when I browse directly to a file in wwwroot, it's served fine. And the virtual directory is working, because when I browse to html files in the virtual directory, they're served. But when I browse to a PHP file in the virtual directory, I receive an HTTP 500 error from IIS.

Searching on the web, I found a bunch of suggestions to confirm that doc_root in php.ini is blank - however, it is and it's still not working.

Other configuration settings going on: Integrated Windows authentication is turned on, client certificates are required, and client certificate mapping is enabled. All of it is working fine for PHP content not in a virtual directory.

Thanks for any suggestions!

  • See what you can do to find out which 500 error you're getting. Make sure that you have IE friendly errors turned off (if you're using IE), and if you still don't get a useful error, check the IIS logs to get the 500 sub-status code. That will likely give a big clue as to the exact error and what to do about it. A Log Parser sketch for pulling the sub-status codes out of the logs follows.
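
    If Log Parser happens to be installed, a query along these lines can summarize the 500s (the log path is the IIS 6 default and is a placeholder; this assumes the sc-substatus field is enabled in the site's logging properties):

    LogParser.exe -i:IISW3C "SELECT cs-uri-stem, sc-status, sc-substatus, COUNT(*) AS Hits FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex*.log WHERE sc-status = 500 GROUP BY cs-uri-stem, sc-status, sc-substatus" -o:NAT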

Windows 7: Windows Server 2003 not appearing

Hi-- I have a SOHO Windows Server 2003 network with three clients, all Windows 7 (upgraded from XP and Vista). The network has been up and running for about a year, with no problems. Since the client upgrades, the server intermittently disappears from the "Network" window listing in Windows Explorer. Sometimes it's there; sometimes it's not.

I can get to the server easily enough by entering "\\MyServer" in the address bar--that gives me normal access, just as if I had double-clicked the server icon. And the server shows up when I do an Active Directory search. The server is just missing from the list displayed in the Windows 7 Network window, which suggests to me that the fix is a configuration tweak on the Windows clients. BTW, I have

Any suggestions on what I need to do to get the server to appear consistently in the Windows 7 Network window?

  • If I recall correctly, this functionality takes place through the Master Browser service as opposed to AD (which would correspond with your note that searching for the Server in AD or accessing it directly works fine). I would start by taking a look at the "Troubleshooting the Microsoft Computer Browser Service" KB article here: http://support.microsoft.com/kb/188305

    From Sean Earp
  • Browsing works through NetBIOS and has always worked better with WINS installed, although in a very small network it may not matter. Win7 (and Vista) try to use DNS instead of NetBIOS, so it's important that DNS is configured correctly.

    On the server, from a cmd prompt: ipconfig /registerdns

    Since you have AD implemented, DHCP & DNS should also be configured on the server: DHCP should be configured to update DNS and should have "update DNS" credentials configured.

    DHCP should be disabled on all other network devices; check the router/firewall and any routers that are plugged in for use as an access point.

    If all of the above is OK, then install/enable the WINS service on the server, set the node type to h-node (0x8), and then add the server IP to the DHCP scope for the WINS server option (sketch below).
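
    From an elevated prompt on the DHCP server, that scope change could look roughly like this (the scope and WINS server addresses are placeholders):

    netsh dhcp server scope 192.168.1.0 set optionvalue 044 IPADDRESS 192.168.1.10
    netsh dhcp server scope 192.168.1.0 set optionvalue 046 BYTE 0x8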

    Reboot the clients, or from a cmd prompt: ipconfig /renew

    From Ed Fries

If LTO-3 full backup takes more than one tape. What is my next step hardware wise?

OK. some givens to factor in

I use Backup Exec 12.x/13.x and have a Server 2003/2008 environment including Exchange.

I have Backup to Disk (Full/Diff) happening that is independent of the backup to LTO (Full/Diff). For more than one reason I'd rather not just switch to backing up from disk to tape; I'd like to keep the backup direct to LTO happening.

I currently have a single LTO-3 drive with no sort of loader/robot/library. The box serving the LTO drive has an Adaptec 39160 Ultra160 SCSI card in it. I currently use one tape for Full (one per week) and one tape for Diff (four days a week before the tape is taken out). The Full backup is bumping up against the 372.5 GB barrier, and when it does, the backup doesn't finish on Saturday; it's still waiting for a tape on Monday morning.

Ward mentioned putting the second LTO3 full tape in on Monday afternoon/evening after normal business hours. The problem with this is compared below:

Normal flow

  • Friday insert LTO3 tape 1 for Full backup for week 1
  • Monday insert LTO3 tape for differential
  • Tuesday, Wednesday, Thursday differentials use tape that was inserted Monday
  • repeat for week 2

2 LTO3 tapes for Full backup flow

  • Friday insert LTO3 tape 1 for Full backup for week 1
  • Monday insert LTO3 tape 2 for Full backup for week 1
  • Monday insert LTO3 tape 1 for Full backup for week 1 (for verify process)
  • Monday insert LTO3 tape 2 for Full backup for week 1 (for verify process)
  • Tuesday insert LTO 3 tape for Differential
  • Wednesday, Thursday differentials use tape that was inserted Tuesday
  • repeat for week 2

The extra tape swaps eat 6+ hours into Monday (starting from the time I put in the second tape). If I did that at 5PM I'd be here until almost midnight swapping tapes. That's not counting the idle time on Sat/Sun/Mon waiting for a tape.

Now I could turn off the verify process and save two tape swaps and shorten the "backup" process by several hours but I can't just put tape 2 in and walk away at the end of the day if I don't turn verification off. Having the backup spill over onto a 2nd tape lengthens the backup process but it also

  • Increases the number of tapes in the rotation (cost)
  • Increases the number of tapes in transport (size/weight of briefcase going to off-site storage)
  • Increases the complexity of the backup process by making me stay on site after hours for verification process
  • Increases the complexity of managing backup/restores from my office which is not right next to the server room. This goes quadruple for dealing with such issues from home.

And yes I'm not going in on Saturday to sit there for 6+ hours and babysit the tape drive. I'd like to have a life outside of work. 12 hour days M-F are bad enough when they happen. I'm not going to permanently tie myself to a 6 day workweek.

The tape drive is a Dell PowerVault 110T LTO3. The backup server is on Gigabit Ethernet using only a single NIC and can fill a full tape in about 12 hours.

I can change the backup process to separate one of the more intensive servers to a full backup on its own LTO to temporarily hold this decision off but soon I think I'll need to choose one of these options:

  1. Buy an LTO-3 drive and take advantage of just having a second physical tape available. This is a less desirable option and only makes sense if LTO-3 drives are considerably cheaper than LTO-4 drives, which is not the case.

  2. Buy an LTO-4 drive, use LTO-4 tapes for full backups, and use LTO-3 tapes for differentials until the LTO-3 tapes get rotated out and new LTO4 tapes match the price of LTO3 tapes. This will probably get me through the weekend backup for years to come without having to swap tapes. This also partially addresses shoeshining, since LTO4 has a lower minimum speed than LTO3.

  3. Buy something that can feed tapes in automatically. I'm assuming there isn't something I can add to the PowerVault 110T and this would mean a purchase of a new device that has the tape and loader in a single unit. This is probably not cost effective vs just getting a drive and manually loading tapes but going autoloading LTO4 would be the ultimate in convenience. I'll let the boss above me decide between single tape drive and autoloading drive.

Evan Anderson mentioned in another solution that you could buy drives around this price range

 LTO-4 (internal drive, 1 tape / day) - $2,766.00  
 LTO-4 (autoloader, 1 tape / day) - $4,566.00

but I don't know specifics on what he or you would recommend for the actual drive and if necessary controller. Show me a newegg URL (or Dell, or HP, or whatever your favorite vendor would be) for your solution if you don't mind looking it up or just give me a brand and a model number and I'll be glad to do the leg work myself.

I'm looking to make a needed purchase some time down the road before this backup rotation gets to be too cumbersome. I probably have a few months.

Xenny mentions the age of the servers and speed of backup. The Exchange server is 6 years old (though the hard drives are much newer). There are a couple of 4 year old servers in the mix with consumer grade sata drives (WD6400AAKS). Servers I consider "new" are 2 years old at this point.

Backup to disk from the old exchange server has been as fast as 2184 MB/min but in general backup to disk is just as slow as backup to tape in this setup. In fact backup to disk is sometimes slower than backup to the LTO-3 tape drive. I've also had issues with drives failing and lack of bays to add more drives. In general backup to disk is even more of an issue than the LTO3/4 transition but that belongs on a different question on serverfault if I wanted input on that subject.

I'll just pick some numbers from a recent backup to give you an idea on speeds. This is not a complete list but gives you an idea on the variety of speeds involved. I plan to update this soon in the format of oldspeed MB/min newspeed MB/min where oldspeed is the old SCSI 320 LTO3 and newspeed is the SAS LTO4.

DC C: ~850 MB/min
DC system state ~700 MB/min
Exchange Server C: and system state ~500 MB/min ~600 MB/min
Exchange Server D: ~1400 MB/min ~1200 MB/min
Exchange Server First Storage Group ~1100 MB/min ~700MB/min
Webserver C: ~600 MB/min ~950 MB/min
Webserver E: ~1700 MB/min ~1950 MB/min
Fileserver C: ~500 MB/min
Fileserver E: ~1500 MB/min ~2200 MB/min
Fileserver G: ~1800 MB/min ~2400 MB/min
Fileserver system state ~650 MB/min
faxserver C: ~400 MB/min ~550 MB/min
Accounting server C: ~1300 MB/min ~1775 MB/min
Accounting server D: ~1500 MB/min ~2250 MB/min
Accounting SQL instance ~1600 MB/min
application server C: and system state ~700 MB/min ~900 MB/min
backup server C: 700 MB/min ~1800 MB/Min
backup server E: 1350 MB/min ~2900 MB/min

Monitoring the Fileserver I saw numbers that make me think the RAID controller is holding back the transfer rates. The controller is SATA 1.5 but the drives are 3.0 capable. I noticed this after changing volumes from RAID 1 to RAID 10 and getting no increase in speed for the backups. Unfortunately, doubling the sustained read speed had no effect on backup to the LTO3 tape drive.

In general backup straight to LTO gives me a decent benchmark of where my servers are I/O limited. The servers that are backing up below 1500 MB/min are generally slow disk wise and the ones between there and 2400 MB/min are still low hanging fruit. For example the Exchange 2003 server is getting low on disk space and continues to expand the database for the First Storage Group out to slower portions of the disks. This server will be replaced with a Exchange 2010 server with faster processors and more disks. The other servers will get disk upgrades and/or SSDs added.

http://en.wikipedia.org/wiki/Tape_drive mentions "When shoe-shining occurs, it significantly affects the attainable data rate, as well drive and tape life." but it doesn't mention shoe-shining reducing the effective capacity of a tape. After looking at archival tapes from the bank, I can confirm about 2% to 15% of space wasted on the LTO3 tapes. Nowhere near enough to keep me from moving to LTO4 or an autoloader, but it could be significant. For those of you with Backup Exec, you can calculate your shoeshining waste by:

  • Making a backup job that will back up around 100% of the tape's native capacity without compression. Disable compression on the drive and in the software when running the test.
  • Looking in the Media tab of Backup Exec and comparing the "Used Capacity" column to the "Data" column. If compression is off and the numbers match, you aren't shoeshining at all.

In my case I had an archival LTO3 tape with 272.4 GB "used" but only 233.67 GB "data", and another with 400.6 GB versus 395.19 GB. I also tried a backup to LTO4 without compression and got 833 GB "used" with only 786.77 GB "data". Obviously the shoeshining will vary from my environment to yours, but before this I didn't think to test it. Hopefully this makes it clear how to figure out how much wasted tape you have in your backup environment; the arithmetic is worked below.
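
The waste percentage is just the gap between the two columns divided by the "used" figure; for the worst tape above:

    waste = (used - data) / used
          = (272.4 GB - 233.67 GB) / 272.4 GB
          ≈ 14.2%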

edit: new info at http://www.fujifilmusa.com/shared/bin/LTO_Overview.pdf showing minimum tape speeds for LTO3 and LTO4. It looks like the IBM LTO4 actually has a lower minimum speed than the IBM LTO3. Either way my average server is too slow to feed LTO3/4 without shoeshining. I'm concerned even my backup to disk local volumes will be too slow to feed the drive quickly but I'll have to test that.

Pulling IBM full height drive info from PDF above I get

LTO4 : 30-120MB/s 800GB native (45-240MB/s compressed)
LTO3 : 40- 80MB/s 400GB native (60-160MB/s compressed)
LTO2 : 18- 35MB/s 200GB native (27- 70MB/s compressed)
LTO1 : 15- 15MB/s 100GB native (30- 30MB/s compressed)  


Update: The server I was using for backup started giving me stop errors, so I moved the tape drive to another server. The old SCSI controller was an Adaptec 160; the "new" controller is an LSI-based 320 (at least I assume the external connector is 320, as the 4 hard drives inside the server show 320 SCSI in the server management).

The new server situation leaves me without backup to disk temporarily until I get an external enclosure for direct attached storage. In general this LTO discussion has pointed me towards buying more hard drives for my servers. I will have work to do reconfiguring RAID arrays to increase the speed of the backup and hopefully increase the reliability of the overall setup.

Update 2: The comparison below uses an old fileserver whose RAID controller bottlenecks all transfers at ~40 MB/s, so ideal would be about 2400 MB/min. This is about the speed needed to test the edge of shoeshining. Presumably the data flow will not be perfectly regular and will force speed matching almost all the way through the test.

I no longer know the buffer size and buffer count I used on the speed test of the old LTO3 drive, but it doesn't change things much at all; I got maybe a 100 MB/min gain by tuning buffers. The test data is about 20 GB of scanned TIFs and JPGs. I did these tests on a Friday afternoon and didn't repeat them enough times to average the data or otherwise weed out invalid data. Testing after hours, choosing different data, and other variables could noticeably affect these tests.

The same servers are used in all tests. The old drive is on a 320 SCSI LVD controller that is PCIx. The new drive is on a PCIe LSI 3801E SAS controller. It is possible that the drive controller and/or the LTO3 tape drive are bottlenecks. I won't be testing the individual components, only the old pairing vs the new pairing. The server running Backup Exec has 4GB ram, 32bit Server 2008 standard, Pentium D 3.2GHz dual core CPU.

Network connectivity is by way of a 1 Gb switch; both servers are on the same switch. I have a Remote Desktop Connection open, but even with the backup going plus that connection, the Gb link is less than 50% utilized at worst and averages more like 25% usage.

So as rough as the test methods are I feel reasonably confident that the bottlenecks aren't in a variable that I'm ignoring.

Short Test Results:
~1500 MB/min using Dell LTO3 drive and LTO3 tape compression ON, 64KB block size (many buffer counts tested, best result listed here)

~1800 MB/min using Quantum Superloader3 LTO 4 drive with a LTO3 tape (same tape as above) compression ON, 64KB block size, 64KB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON

~2150 MB/min using Quantum Superloader3 LTO 4 drive with a LTO3 tape (same tape as above) compression ON, 256KB block size, 256KB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON
~2200 MB/min using Quantum Superloader3 LTO 4 drive with a LTO3 tape (same tape as above) compression OFF, 256KB block size, 256KB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON

~2050 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression ON, 256KB block size, 256KB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON
~2250 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression OFF, 256KB block size, 256KB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON

~2050 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression ON, 256KB block size, 1MB buffer size, buffer count 10,highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON
~2300 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression OFF, 256KB block size, 1MB buffer size, buffer count 10, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON

~2200 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression ON, 256KB block size, 1MB buffer size, buffer count 20, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON
~2300 MB/min using Quantum Superloader3 LTO 4 drive with a LTO4 tape compression OFF, 256KB block size, 1MB buffer size, buffer count 20, highwater count 0, Write Single block mode ON, Write SCSI pass-through mode ON

It's clear that tuning block size is more important than buffer size. No matter the block or buffer size you use, you will get better performance turning off compression if your source data can't keep up with the tape drive's minimum speed-matching rate. Unfortunately that is a per-drive setting, not a per-job or per-tape-format setting, so you can't restrict compression to full backups or to LTO3 only. You will also have to test how much of an issue it is with your combination of hardware/software. Of course that hit in performance is minor, and the more important tests will be to optimize the full backup of 600 GB to 800 GB instead of 20 GB. I'll try to update again once I have a few weeks or months of backups done.

  • inevitably, backups exceed the capacity that you originally planned for. Here's what i would suggest and say about your situation:

    1. So the Full backup exceeds the capacity of one tape. Then use two tapes.

    2. Follow Symantec's recommendation and continue doing your backup to disk, then backup those disk backups to tape. schedule the backups to disk to occur after hours when fewer resources are in use. schedule the backups to tape to occur anytime during the day after the disk backups are complete because the backups to tape don't have any impact on production systems.

    3. Think of your backups for the week (Full and Differentials) as being part of the same backup set. if it takes two or three tapes per week, then so be it.

    4. schedule the backups to tape to occur only during the week when you're there to swap the tapes.

    i have a similar situation, i'm using a dell powervault 110t lto2 drive and here's what i do:

    1. on saturday i take a full backup to disk (backup to disk folder for full backups).

    2. sunday through friday i take incremental backups to disk (another backup to disk folder for incrementals).

    3. monday through friday i take backups to tape of the full and incremental backup to disk folders. when the tape reaches its capacity i swap it out. if it reaches capacity in the middle of the night, i swap it out the next morning and the tape job finishes.

    4. after friday's backup to tape job i swap tapes for the next week. the two tapes i pull out are the full and incrementals from the current week and go into my 4-week rotation. now i know that all of the current week's backup data is on one tape set, stored off site.

    rinse and repeat

    joeqwerty : as a side note, i wouldn't spend any money on additional tape drives, autoloaders, etc. you can't rightfully think that you need to buy new hardware every time your backups exceed the capacity of a single tape. think of the backups that occur during any given week as part of the same backup set, whether that takes one, two, three tapes, etc.
    pplrppl : When backing up directly to tape if the 2nd tape isn't inserted before the data changes there will be errors logged in the backup. Since you backup to disk then to tape you are avoiding this issue. I'll test backup to disk to tape for this reason and to test the shoe shining effect in the near future. You should however be less cavalier about the just use a 2nd tape suggestion. There are significant issues to consider.
    joeqwerty : @pplrppl: Cavalier? Seriously? You think my advice was cavalier, as in an offhand and disdainful dismissal of an important matter? Why don't you tell me how you think my advice was cavalier. Also, why don't you educate me on the significant issues related to using a second backup tape as it's apparent that I'm lacking in my understanding of proper backup techniques. Thanks much.
    joeqwerty : @pplrppl: Also, while you may disagree with my answer and may even deem it technically incorrect or inferior there's no need to pass judgement on my suggestion as being cavalier as that implies a lack of good intention, or even carelessness on my part. IMHO there's no place here for that type of comment.
    pplrppl : Well it's hard to clearly comment on the problems with your question because you have duplicate numbers in your lists. If we say the first list is A and 1. in that list is 1A then I suppose we can use that to specify.
    pplrppl : 1A suggests that you think there is no problem with using a 2nd tape and I shouldn't worry about it. You may want to reread the question as I've edited it and have detailed more than one reason why this is an issue. 4A suggests I do the backups only when I'm there to swap tapes which ignores the fact that a Full backup takes 12 hours or assumes that there will be no issues with running a full backup during business hours. 3A again takes the tone of it not being a problem "so be it".
    pplrppl : Overall I'm assuming you have a tape loader or are so used to "backup to disk to tape" that you aren't familiar with the issues that occur when a backup job is still running for two or three days or backup direct to tape during business hours. And you seem to assume that backup to tape during business hours won't be problematic because you expect me to use the "backup to disk to tape" model. I don't mean to be rude but I did notice you make a similar response to another question and I don't think it is as helpful as you may have intended.
    joeqwerty : @pplrppl: I had posted another answer in response to your comments about my original answer, but in all honesty it was rude and unbecoming to post it from one fellow IT person to another, so I deleted it. I'll say this however, in the future it would be best not to use words like "cavalier" when responding to someone's attempt to help you as it's bound to get then riled up. If you don't like their answer then simply dismiss it and forego the impulse to respond to it.
    pplrppl : Well I'd gladly buy you a drink and ask you to help me run through some alternative words to use for the next time. I only felt it necessary to respond to make it clear to other readers that I was looking for further input. My apologies if I crossed the line.
    joeqwerty : Accepted. My apologies as well. Keep that drink on tap for me. ;)
    From joeqwerty
  • We do something similar to joe:

    1. saturday: full backup to disk, when that's done start the full backup of that to tape
    2. monday: at the end of the day, stick in a second tape and let the backup finish
    3. mon-fri: differential backups to disk only

    If you really have to do the disk-tape independent of the disk-disk backup, I'd live with the two backups being slightly out-of-sync:

    1. Start the disk-disk and disk-tape on saturday, the disk-disk will finish and the disk-tape will be waiting for a second tape on monday
    2. Finish the disk-tape on monday (I'd still wait until end of the day to put the tape in).
    3. Mon-Fri, do your disk-disk differentials (actually, I see that you don't say you're doing that, but I'm assuming you do)
    4. Tue-Fri, do your disk-tape differentials

    I don't see a problem with having slightly different sets of files backed up on the two different media. In almost all cases, you're going to restore a file from the disk backup, with the tape just a fallback or an easy way to organize multiple backup sets.

    pplrppl : If the backup to tape Full second tape starts at end of business (after 5 or 6 PM say) then you can't do differentials Tue-Fri as mentioned in your point 4. At best this would leave me with tape two of the Full in Mon morning and Differential to tape occurring on Tue, Wed, Thur, with tape 1 of the next full going in on Fri.
    From Ward
  • As an aside, note that 100 MB/min is far below the minimum speed for tape streaming with LTO3, so you're probably losing a fair amount of capacity with the tape stopping and starting (i.e. you're probably getting better than 1.5:1 compression, but this is lost in gaps in the data on the tape). This will probably be rather worse with LTO4, as I think the minimum speed has gone up.

    Disk - Disk - Tape will help with the minimum speed problem, and will give you some capacity for free.

    If you're not doing it, strongly consider some kind of scheduled defrag of the disks on the servers you're backing up. 1000 MB/min is not a great level of throughput for gig ethernet on reasonably modern hardware. I'd expect that on even 2 year old machines you should be able to get 1800MB/min (that's only reading from the server disk at 30MB/sec), so there's scope for improvement.

    Edit: For LTO 3, you really want a 256KB block size for best performance.

    WRT shoe shining, there's no time for the tape to rewind if the buffer runs empty briefly, so it'll leave a gap on the tape.

    pplrppl : If I can confirm that shoe-shining affects usable capacity you'll definitely get modded up. If the capacity effect is significant I'll create a new question, as this concept is not directly tied to the question.
    pplrppl : Neither Backup Exec nor Windows offers 256KB block size. The highest either is offering me is 64KB blocks. It may be that a higher block size could be specified in the hardware RAID arrays but not all of my servers have hardware RAID.
    pplrppl : Since the PowerVault 110T LTO-3 is an IBM full-height drive, moving to an LTO4 from anyone other than HP would actually lower my minimum transfer rate; see the edit above.
    pplrppl : I got 1.536:1 on the latest full backup. Backup exec rounds up and calls that 1.6:1. 578.53GB compressed down to 367.6GB on tape. It will take me weeks if not months to get the backup speed optimized.
    Matt Simmons : You're also reducing drive life with shoeshining
    pplrppl : Not just the drive but also the media as well. It is increased wear and tear in every sense. But as I stated in the update to the question LTO4 drives can have a lower minimum speed and would reduce the shoeshining.
    From xenny
  • Here's one option that could help you get by for a while:

    Have you considered splitting your backup into two separate data sets? Depending on how your files are organized, you might be able to easily divide it into two logical chunks (e.g. by department). You would do a full backup of the first dataset on Thursday night and a full backup of the second dataset on Friday night. Each night after that would run two jobs onto a single tape, a differential for each dataset.

    This way you're not coming in on weekends and you aren't having to babysit a drive while waiting for a verify to complete. In addition, you get the added protection of not having all of your eggs in one basket, so to speak.

    pplrppl : I have definitely considered a similar strategy. My biggest concern with splitting the jobs up is how the tapes would be handled when I'm not around (and to/from the bank). I have 3 weeks of vacation stored up, and if I leave, who will keep up with a complex tape rotation? I was out for my father's surgery recently and the person asked to put the tape in didn't insert the new tape on Friday as needed. I came in on Saturday after checking my email and seeing the tape request. Adding more tape swaps on any day of the week increases the chances the backups won't happen as planned.
    From Nic

Restricting SSH shell access to Debian server

Hello.

Still new to the whole Debian thing so bear with me.

The only thing I want is for a user to log in via SSH. No files or directories (like /etc, /var) should be visible at that point.

The only thing the user can do is to "su" to log in as root and then administer the system.

This is done to increase security. Every little bit helps, right?

Apparently chroot is not that secure (saw an answer on here that said that; can't seem to find the link, though).

  • You can use ForceCommand to avoid giving any shell access to the user.

     ForceCommand
             Forces the execution of the command specified by ForceCommand,
             ignoring any command supplied by the client and ~/.ssh/rc if pre-
             sent.  The command is invoked by using the user's login shell
             with the -c option.  This applies to shell, command, or subsystem
             execution.  It is most useful inside a Match block.  The command
             originally supplied by the client is available in the
             SSH_ORIGINAL_COMMAND environment variable.  Specifying a command
             of ``internal-sftp'' will force the use of an in-process sftp
             server that requires no support files when used with
             ChrootDirectory.
    
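    A minimal sketch of how that looks in /etc/ssh/sshd_config, following the internal-sftp example from the excerpt above (the user name and directory are placeholders):

    Match User restricteduser
        # the chroot target must be owned by root and not writable by anyone else
        ChrootDirectory /home/restricteduser
        # ignore whatever command the client asked for; serve SFTP only
        ForceCommand internal-sftp
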
    From Gleb
  • The only thing the user can do is to "su" to login into root and then administer the sytem.

    This is done to increase security. Every little bit helps right?

    wrong.

    if you want root to be able to login, you should just allow root to be able to login.

    logging in as a user and then doing su to become root is a more complex and less secure way of doing that:

    • in general anything may fail, so the more steps you ask the user to take, the more chances there are that one of them goes wrong (by "wrong" I mean you did not do something right and someone hacks your server)
    • you would be forcing the user to enter the root password, while it could be considered much more secure to use SSH keys to log in without having to enter any password, or to use SSH keys AND have to enter their passphrase, for added security

    if you don't really know what you want, you'd better keep it simple and use standard programs written by people who do know what they are doing. It's clear you are not a security expert, so consider that anything you may come up with has already been done by someone who is an expert, or has been discarded as worthless, or never occurred to them because it just does not make sense.

    you probably want to know how to use ssh-keygen and how to configure sshd_config; a minimal sketch follows.
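
    For example, a key-based root login setup might look like this (the host name is a placeholder, and "without-password" is the option spelling used by the OpenSSH versions Debian shipped at the time):

    # on your workstation: generate a key pair and copy the public key to the server
    ssh-keygen -t rsa -b 4096
    ssh-copy-id root@server.example.com
    # (or append the public key to /root/.ssh/authorized_keys by hand)

    # on the server, in /etc/ssh/sshd_config: allow root logins with keys only
    PermitRootLogin without-password
    PasswordAuthentication no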

    From Lo'oris
  • If you're really looking to limit access, then I'd suggest using sudo and enabling only the commands necessary for this user to administer the applications you choose; a sketch of a sudoers fragment follows.
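
    A hypothetical /etc/sudoers fragment (edit it with visudo; the user name and commands are placeholders):

    # allow one account to restart Apache and update packages, and nothing else
    someuser ALL=(root) /etc/init.d/apache2 restart, /usr/bin/apt-get update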

    From Eddy

repairing mysql replication

We were running out of disk space on the slave because of relay-bin files, so I stopped the MySQL server, deleted the relay-bin files, and changed my.cnf to point the relay log to another location. In the slave status output, I noted 'Relay_Master_Log_File' and 'Exec_Master_Log_Position' and used them in a CHANGE MASTER statement. It doesn't work: at the prompt it says it failed to open the relay log at the old position. Why is MySQL still looking at the old files, and how can I change that?

Thanks.

  • To get it working right away, your best bet may be to run mysqldump --master-data --databases db_name > snapshot20091124.sql and scp it over to your replication slave. You can double-check the log position by paging through the first few lines of the dump. Issue a "stop slave;", do an import with "mysql -u root -p < snapshot20091124.sql", then issue a "start slave;". Replication is a sometimes needlessly complicated beast.

    From Clyde
  • Try to restart your slave server to flush out the cache.

  • You should have originally deleted the logs by issuing RESET SLAVE.

    Still, if you:

    • Are happy with the consistency of the slave's data to-date.
    • You have a record of Exec_Master_Log_Position and the corresponding log filename.
    • The master still has the logs which correspond to this log position and filename.

    Issue STOP SLAVE and RESET SLAVE. This will remove all replication-related information from the slave, including relay-log.info, which is likely to be causing the error you see.

    Use CHANGE MASTER ... to reconfigure the slave with your log position, host, username, etc.

    Then kick it back up with START SLAVE. A sketch with placeholder values follows.
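
    Put together, it would look roughly like this; the host, credentials, and coordinates are placeholders for your recorded values:

    STOP SLAVE;
    RESET SLAVE;
    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000123',   -- your Relay_Master_Log_File
        MASTER_LOG_POS=4567;                  -- your Exec_Master_Log_Position
    START SLAVE;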

    From Dan Carley

SQL Server 2008 SSIS package compatibility

I am trying to save an SSIS package on a SQL Server running 2005. The issue I have is that I am using SQL Server Management Studio 2008 on my local machine to do this, and it won't let me save the package on the server because it's not compatible with 2008. Is there some kind of compatibility option in Management Studio that I don't know about?

  • No, you need to use the proper version. They both co-habitate on one computer nicely. SSIS 2008 is probably using a different .net framework (3.5?).

    From Sam
  • This isn't going to work. As Sam mentioned, SSIS 2008 is not backward compatible with SSIS 2005. SSIS 2008 packages require SSIS 2008 to be installed on the system in question. It can and does co-exist with SSIS 2005. The only catch is to watch your paths with respect to running from the command line, etc., as likely your SSIS 2005 pathing is first in your PATH statement, meaning the SSIS 2005 executables will be executed, and an SSIS 2008 package will then fail. We have systems where both versions of SSIS are installed, and we've had to explicitly specify pathing on those systems.

    Your other option is to install SQL Server 2005 SSIS and BIDS on your system (meaning the VS2005 shell will be installed as well) and rebuild your package in 2005 and then deploy it. VS2005 generally behaves just fine installed side-by-side with VS2008.

How do I tell if apache is running as prefork or worker?

How do I tell if apache is running (or configured to run) as prefork or worker?

  • The MPM is configured at compile time. One way to figure it out afterwards is to list the compiled-in modules; that list will include the chosen MPM. The listing can be done by running the Apache binary with the -l flag.

    andreas@halleck:~$ apache2 -l
    Compiled in modules:
     core.c
     mod_log_config.c
     mod_logio.c
     worker.c
     http_core.c
     mod_so.c
    andreas@halleck:~$
    

    Here we find the module worker.c, hence I'm running the worker MPM.
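
    The same information is available from the -V flag, on systems where the binary is named httpd (just an alternative spelling of the same check):

    # prints a "Server MPM:" line among the build settings
    httpd -V | grep -i mpm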

    From andol

Could not continue scan with NOLOCK due to data movement during installation

Hi,

I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (Apart from a warning about Windows Firewall and opening ports which is unrelated to this and shouldn't be an issue - I can open those ports).

Half way through the actual installation, I get a popup with this error:

Could not continue scan with NOLOCK due to data movement.

The installation still runs to completion when I press ok.

However, at the end, it states that the following services "failed":

Database Engine Services, SQL Server Replication, Full-Text Search, Reporting Services

How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc) is missing?

I know from my programming experience that locks are for concurrency control and the Microsoft help on this issue points to changing my query's lock/transactions in a certain way to fix the issue. But I am not touching any queries?

Also, now that I have installed the app, when I login, I keep getting this message:

TITLE: Connect to Server

Cannot connect to MSSQLSERVER.


ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476


BUTTONS:

OK

I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs.

Please advise on both of these errors. I think these two errors are related.

Thanks

  • Did you configure your server to accept external connections? By default, SQL Server disables connections from other machines.

    If that is not the case, try switching your connection method to TCP/IP as default for both server and client and see if you still get the error.

    Make sure that the MSSQLSERVER service is running. Check if you can connect to the SQL server using client tools on the server.

    From baldy
  • I'm getting the same error as well.

    In my case, I have a table that has ~6000 rows. A job calculates new rows from another data source and inserts them into this table, making the row count ~12000. Then the same job deletes the previous rows, to satisfy several business requirements ("truncate before / insert afterwards" is not possible in my case).

    And there are other queries that use SELECT with NOLOCK to read from the table. I am guessing the error means that just after SQL Server found which record to read, the row was deleted.

SQL Server Tuning Resources

Hi,

I want to learn more about tuning SQL Server instances (2005/2008), does anyone have any good resources they can point me at?

Dave

NB: I'm talking about the hardware/instance configuration side of things rather than tuning SQL queries

  • Some of the basics include using domain service accounts to run the instances to allow for replication and certain local policies to apply.

    These tips apply to a dedicated server. Consider the impact of these on a server running other services

    In Local Security Policy, grant the service accounts that run the instances "Lock Pages in Memory" and "Perform Volume Maintenance Tasks."

    Lock Pages in Memory - This will allow SQL Server to keep data in RAM instead of paging it out if there is contention with another process.

    Perform Volume Maintenance Tasks - This will allow SQL Server to allocate file space on the fly without having to pre-allocate it by writing zeros (instant file initialization). This can increase write speed.

    Put your logs and databases on separate physical spindles if possible.

    Autogrowth should not be relied on for growing databases; it will cause fragmentation if left over time. If possible, databases should be grown by hand (for example, as sketched below), and autogrowth should only be relied on as a failsafe.
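
    A manual growth is just an ALTER DATABASE against the data file; the database and logical file names here are placeholders:

    -- grow the data file to 50 GB in one operation instead of many small autogrowths
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, SIZE = 50GB);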

    From MarkM
  • Very open-ended question!

    Microsoft, as always, is a great resource. Check out TechNet's SQL Server Best Practices. Here are some things that are top of mind for me that you'd want to explore:

  • Check out Brent Ozar's collection of information. Brent now works for Quest, which owns SQLServerPedia, and there's more practical information there. You might also check the 24 Hours of PASS sessions, like Andy Kelly's, which use wait stats to help you pinpoint where the issue might be.

How to change size of remote ssh session in my terminal?

I use the Terminal app for SSH connections to my Ubuntu VPS. The problem is the size of the remote terminal: it doesn't fill my Terminal app window, only part of it. How do I make it fill the full terminal area?

  • What is your terminal application? Running on what OS?

    Perhaps what you are looking for is:

    eval $(resize)
    
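    If the resize utility isn't present on the VPS, it ships with the xterm package on Debian/Ubuntu, so installing that just for the script is a common trick:

    sudo apt-get install xterm
    eval $(resize)
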
    From jlliagre

How should someone create an encrypted password for /etc/shadow?

I am setting up a new account on a Linux box for Subversion repository access, and can send the password to the new user. However, I think there was a command line utility for this new user to encrypt the password he likes into a format I can copy/paste directly into the /etc/shadow file.

What was the full command that this new user should run on the console (e.g. Bash) to create such an encrypted password?

UPDATE: the user will not be allowed to log in on the machine, and the account will merely be used for svn+ssh:// access. Therefore, the user cannot change it himself.

  • The format of the password in shadow can vary. You could set it to be MD5 or the good old DES3 or... You would be fine sending your user a password and forcing her to change it at first login (# chage -d 0 username).

    From Gonzalo
  • Instead of having them encrypt the password and send it to you, why not just tell them to type:

    passwd
    

    It will do everything you want with the added advantage that they can change their passwords without any extra work for you.

    EDIT: According to this, there's supposedly a command called makepasswd that you can get for Debian/Ubuntu.

    Daniel Pryden : Because that requires the user to already be logged in. The OP wants a solution to set the password securely *before* the user logs in for the first time.
    Brendan Long : It seems like randomly generating a password and having them change it when they log in is just as secure as having them generate a password and manually adding it.
    Egon Willighagen : The user will actually never login (shell:/bin/false), and only allow SVN read/write access...
    Brendan Long : You could set shell:/usr/bin/passwd :D
    Brendan Long : I mean that last comment as joke, but apparently it will work: http://markmail.org/message/ekuxvnhdagywy4i5
  • /etc/passwd and /etc/shadow are very easy to tokenize with the usual command line tools (i.e. grep, awk, sed, tr, etc).

    What becomes interesting is the actual password hash field in /etc/shadow; its prefix tells you how the password has been encrypted. From man (5) shadow:

    The password field must be filled. The encrypted password consists of 13 to 24 characters from the 64 characters alphabet a thru z, A
    thru Z, 0 thru 9, \. and /. Optionally it can start with a "$" character. This means the encrypted password was generated using another
    (not DES) algorithm. For example if it starts with "$1$" it means the MD5-based algorithm was used.
    

    How it was encrypted broadly depends on how old the installed OS happens to be. It's important to pay special attention to the second field in /etc/shadow.

    You should make every effort to follow whatever hash the system is using, be it DES, MD5, etc., since it's so easy to detect.

    From Tim Post
  • Why not su to the user and run passwd?

    Egon Willighagen : Because I do not want to know the password... or have it send to me in an unencrypted form.
    From EsbenP
  • The user can execute something like this on his computer:

    echo "password"|openssl passwd -1 -stdin
    

    and then send you the output.

    Antoine Benkemoun : +1 does exactly what you are looking for.
    Egon Willighagen : Why is that command giving a different value each time I call it?
    Gonzalo : The format of the output is $id$salt$encrypted. Different id and different salt give you a different encrypted string. The id is the algorithm used: 0-DES, 1->MD5, 2a->Blowfish, 5->SHA-256, 6->SHA512
    Egon Willighagen : OK, so I need to figure out how to trigger a certain id (which was the same all the time) and salt (which changed with each call)... makes sense.
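
    For what it's worth, the salt can be pinned on the command line, which makes the output reproducible; the salt string here is just an example:

    echo "password" | openssl passwd -1 -salt xyz -stdin
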
    From Daniel
  • Is there a way to generate these passwords via the command line? Yes, with the Debian package makepasswd (but only for MD5):

    echo "mypasswd" | makepasswd --crypt-md5
    $1$r2elYKyB$vUr/Ph.brKTldM2h2k8J5.
    

    But this will not work via copy and paste inside /etc/shadow. To change a password via script on some Linux distributions, you can use:

    echo oracle:mypasswd | chpasswd
    

    or

    echo -n mypasswd | passwd --stdin oracle
    
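    If you already have a hash (from openssl or makepasswd above), another option is to hand it straight to usermod; quote it so the shell doesn't expand the $ characters (the hash below is just the example value from above):

    usermod -p '$1$r2elYKyB$vUr/Ph.brKTldM2h2k8J5.' oracle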

Hyper-V - Why would .vhd expand when it still has unused space?

I have a dynamically expanding .vhd file. The current file size was about 50 GB, with only 29 GB used. However, it recently expanded to about 54 GB, but still only shows 29 GB used.

Are there any reasons that a .vhd file would expand when it still has plenty of unused space?

  • The .vhd file tracks changes to an assumed blank disk.

    Just because some areas of the filesystem aren't used, doesn't mean that they don't have shadow copies or leftovers from previously used files.

    This is, for example, how un-delete programs work.

    To clarify:

    The "unused" space on your disk may not be blank.

    Nathan : So if I understand you correctly - this is completely below the windows file system level, and is akin to individual magnetic bits on a physical hard drive. Is there any way to get a report on how much space is unused by the *filesystem* but still taking up space in the .vhd file?
    Nathan : Another question: why wouldn't the Windows file system on the virtual OS simply overwrite the existing bits in the .vhd file like it would on a physical hard drive?
  • In addition to John's answer, you can regain space by running the pre-compactor application inside the guest to zero out any unused space inside the VHD, and then running the shrink/compact on the VHD from your virtualization tool... then you'll know the real usage ^^

    The precompactor tool is bundled with most releases of Virtual PC and Virtual Server and can for example be found in:

    Program Files\Microsoft Virtual PC\Virtual Machine Additions\
    

    It will work for a Hyper-V VHD as well, regardless of where you sourced the precompactor ISO from; a diskpart sketch of the compact step follows.
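
    On Server 2008 R2, the compact step itself can be done with diskpart once the guest's free space has been zeroed; the path is a placeholder, and the VM should be shut down first:

    rem inside an elevated diskpart session
    select vdisk file="D:\VMs\guest.vhd"
    attach vdisk readonly
    compact vdisk
    detach vdisk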

    Nathan : Where do I get the pre-compactor application?