Wednesday, January 26, 2011

What is wrong with my .htaccess file? I'm trying to permanently redirect my whole site to the index.htm file

This is giving me a 500 internal server error. Any suggestions? I have tried various examples but I think I'm missing something...

RewriteEngine On
RewriteCond  %{request_uri}!^ /index\.htm
RewriteRule  ^(.*) /index\.htm [R=permanent,L]

It displays the homepage if I navigate there, but anything that meets the condition (everything apart from index.htm) gives a 500 server error.

EDIT: with the above code it no longer gives any 500 errors, but it doesn't redirect any pages

  • You're not redirecting to /index.htm, you're redirecting to / which is different as far as Apache is concerned.

    Try: RewriteRule ^(.*)$ /index.htm [R=permanent,L]

    SocialAddict : please can you edit out the domain - i forgot :)
    SocialAddict : Incidentally, I tried that just before your post, with the same issue :(
    SocialAddict : I'm using utf-8 as the .htaccess file type is that correct?
    SocialAddict : still doesn't redirect but displays the original (old) page
    Andy Shellam : What about capitalising REQUEST_URI? It might be case-sensitive, and in all the Apache documentation, those macros ARE IN CAPS (not shouting, just illustrating :-P )
    SocialAddict : tried that now too no luck
  • Your rewrite condition isn't split up correctly: the not-starts-with pattern ends up as part of the previous parameter. :) You need a space after the %{REQUEST_URI} and before the !^

    RewriteEngine On
    RewriteCond  %{REQUEST_URI} !^/index\.htm$
    RewriteRule  ^(.*) /index.htm [R=permanent,L]
    
    From Amadiere

Recommendation for a non-standard SSL port

Hey guys,

On our server I have a single IP and need to host 2 different SSL sites. The sites have different owners, so they have different SSL certificates and can't share a single certificate with SANs.

So as a last resort I have modified the web application to allow a specified port for secure pages. I used port 200 because it looks simple. However, I'm worried that some visitors may be unable to see the site because their firewalls / proxies block that port for SSL connections. I heard some people were unable to see the website (a home user and someone from an enterprise company), but I don't know if this was the reason.

So, any recommendations for a non-standard SSL port number (443 is used by the other site) which may work better for visitors than port 200? Like 8080 or 8443, perhaps?

Thanks!

  • Using port 200 would definitely be an issue. My users wouldn't be able to see your site on that port.

    8443 is a good compromise. Being standard in Java environments, more professional environments will allow it. I suspect there will still be issues, however.

    SuperDuck : Ah, great tip, thanks Warner. Yeah, there may always be some issues, but at least it will give a better chance. I heard the bank, who will check the website for the virtual POS application, was unable to access the site. Though I'm not sure if this was the problem, I'll try my chances with 8443 =)
    SuperDuck : great! just remembered we had some spare IPs, waiting for the support to adjust the firewall.
    From Warner
  • I present Server Name Indication over SSL. With this, you can have apache listen on one IP:port and browsers will send you the hostname before initiating SSL. All modern browsers support this, unless you're enslaved by IE6 for some reason.

    SuperDuck : Thanks for your comment, jldugger. SNI was my second hope, as we dropped support for IE6. However, I've noticed that it won't work for XP systems, even with the latest IE version. So I guess it needs at least 4-5 years to become usable.
    From jldugger
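For reference, a name-based SSL setup with SNI looks roughly like this in Apache (2.2.12 or later, built against an OpenSSL with TLS extension support); hostnames and certificate paths below are placeholders:

```apache
Listen 443
NameVirtualHost *:443

# The first vhost also serves as the default for clients
# without SNI support (e.g. any IE on Windows XP).
<VirtualHost *:443>
    ServerName site-one.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/site-one.crt
    SSLCertificateKeyFile /etc/ssl/private/site-one.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site-two.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/site-two.crt
    SSLCertificateKeyFile /etc/ssl/private/site-two.key
</VirtualHost>
```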

Does Gigabit degrade all ports to 100 megabit if there is a 100 megabit device attached?

Our company is buying some HP Procurve managed gigabit switches to replace some of our core switches. However, we aren't able to upgrade all of our switches from 100Mb to Gigabit switches. I think I know the answer but I'm not exactly sure. If we plug those 100Mb switches (or even a 100Mb device) into those Gigabit switches, will the performance of the entire switch drop to 100Mb or will just that one port work at 100Mb?

  • Just that port.

    hjoelr : Great! That's how I thought it worked, but just wanted to make sure.
    From mfinni
  • If you can avoid cascading switches, I'd strongly encourage it. The downsides are easily illustrated by two people on the same cascaded switch copying a large file to a file server. A single user under normal usage being able to cause performance issues for multiple other users is obviously bad.

    Implications are even worse with servers.

    hjoelr : Yea, I totally agree with you. We are planning on removing the cascade switches when we get a chance to completely re-wire our 40-year-old building.
    David Spillett : Many Gbit switches have SFP ports that would allow you to install uplinks between them, reducing (but by no means removing) the potential performance problems with cascaded switches by providing a faster (10Gbit) backbone for them to talk over. The fiber modules and related kit may be prohibitively expensive for you at the moment, but it might be worth getting switches that support this for future-proofing.
    From Warner

Test a site with a static subdomain locally

How can I test a site that uses one or more static domains for serving images locally?

e.g.

  • domain.tld with images served from static.domain.tld
  • Local working copy of the site on WAMP checked out from SVN: URLs will be pointing at static.domain.tld rather than static.domain.local
  • I'm not exactly sure what you're asking, but you could always put entries in your hosts file if it's a question of resolving static.domain.tld, domain.tld, etc.

    %systemroot%\system32\drivers\etc\hosts:
    
    127.0.0.1   static.domain.tld
    127.0.0.1   someotherstatic.domain.tld
    
    From gravyface

Where to get the NT option pack

Hi,

Where do I get the NT Option Pack? All the download links I could find are down... Does somebody know where I can still find it? I have access to MSDNAA, but I couldn't find anything there...

Thanks! Yvan

  • Any reason you don't move to a newer version of Windows instead? Was this NT 4 or 3.51? It's been a LONG time since anyone referred to the NT Option Pack..

    Steve Radich-BitShop.com : Unless you are running ISAPI routines the upgrade all the way to IIS 7.5 (Win 2008 R2) shouldn't be too bad - The main differences are security options that the default values changed - Classic ASP hasn't changed all that much over the years if that's what you are using. I don't know if idc/htx files are still supported, that may be the only true "missing" feature from the oldest IIS.

Output the number of cores and speed of a server?

I have access to another college's standalone server and am running several experiments on it. However, I don't know how many cores or the speed of the cores in the machine. Is there a way to get that information through the command line? Right now I'm accessing it through SSH.

  • cat /proc/cpuinfo
    
    tylerl : You beat me to it.
    John Gardeniers : That's assuming it's a *nix system.
    Dennis Williamson : ...that uses the `/proc` filesystem.
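To pull out just the two numbers asked about, a couple of one-liners over /proc/cpuinfo (Linux-only; field names can vary by architecture):

```shell
# Number of logical cores (one "processor" stanza per core):
grep -c '^processor' /proc/cpuinfo

# Model string of the first core, which usually includes the rated speed:
grep -m1 'model name' /proc/cpuinfo

# coreutils alternative for the core count:
nproc
```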

An alias (defined as a hosts entry) for a fileserver works only when it is a win2008-box, why?

On a terminal server (Win2008), we would like to have an alias for the real file server name, and therefore we put a line in the hosts file like:

  • 192.168.0.10 BigFiler

This works fine only for file servers running Windows 2008, not for Win2003 servers. Why?

We have not defined a CNAME alias in the DNS zone!

  • Without more details in your question, I'm assuming you have this issue:

    http://support.microsoft.com/kb/281308

    Which basically means that the W2K3 server isn't listening for requests to the alias name.

    Ice : We have not defined a CNAME alias in the DNS zone! There is only a hosts entry on the client computer from which we want to access the file server. KB281308 doesn't apply to this issue, I think.
    joeqwerty : It does. It doesn't matter whether you have a CNAME record in your DNS zone or an entry in your hosts file. The file server doesn't know that it should respond to the name in the hosts file. 1. The file server has a computer name, let's call it Filer. 2. On the terminal server you have an entry for Filer called BigFiler. 3. Filer doesn't respond to the name BigFiler because it doesn't know that it should. 4. Follow the KB article to make Filer listen for requests to the name BigFiler. 5. Test.
    Ice : OK, but the terminal server converts any access to BigFiler to the IP address of Filer as defined in its hosts file, or not?
    joeqwerty : At the network layer yes, but at the application layer the request is still going to the "alias" name, which the server won't respond to, unless you follow the steps in the KB.
    From joeqwerty
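For reference, the fix in KB 281308 amounts to setting a registry value on the file server (followed by a restart of the Server service). Expressed as a .reg fragment, with the value name and location per the KB article:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"DisableStrictNameChecking"=dword:00000001
```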

How to Enable Ports 25 - 28 on a Cisco Catalyst 3750

I am trying to enable ports 25 - 28 on my 28 port Catalyst 3750. These four ports are my fiber ports. I am using the following command to bring up that interface.

interface range Gi1/0/25 - 28

That works and it dumps me at the config-if-range prompt. This is where I get stuck. I just want to enable these four ports and have them be in VLAN 1 and up, just like ports 1 - 24.

How do I do this?

  • First of all, do a show running-config interface GigabitEthernet 1/0/X and have a look at how those interfaces are actually configured.

    Then do what is needed:

    • If they are in shutdown state, issue a no shutdown command.
    • If they are not in the right VLAN, issue a switchport access vlan X command.
    • If they are configured for something other than standard access (e.g. trunking), clear their configuration and reconfigure them.
    Webs : Could also do "sh int status" for a quick view look at all interfaces. But Massimo hit the nail on the head with his answer.
    Jared Brown : I issued a "no shutdown" to the interface range Gi1/0/25 - 28. But when I back out of configuration mode and do a "show interface Gi1/0/25", the output states "GigabitEthernet1/0/25 is down, line protocol is down (not connected)". Ports 25 - 28 are still not connected.
    James Sneeringer : This may be a silly question, but are they actually connected to something? The fact that they say "down" and not "administratively down" means they are enabled. The switch just doesn't see anything connected to them. Do you have SFP modules inserted? The output of `show int Gi1/0/25` (or 26-28) should have a line showing the media type, such as `media type is SX`, or `media type is Not Present` if there is no SFP module installed.
    Massimo : Agreed. "Line protocol is down" states clearly that the switch thinks there isn't anything connected to those interfaces, so you should have a look at cabling and at whatever is at the other end.
    From Massimo
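Put together, the sequence described above would look something like this from the IOS CLI (a sketch; the VLAN and interface numbers are taken from the question):

```
! Enable ports 25-28 as access ports in VLAN 1
conf t
 interface range GigabitEthernet1/0/25 - 28
  switchport mode access
  switchport access vlan 1
  no shutdown
 end
! Then verify link and VLAN assignment:
show interfaces status
```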
  • Do the ports require GBICs (and, if so, do you have GBICs installed)? Do you have cables attached to the ports? If you're using fibre, you MAY have to swap the connectors around (I don't think this is possible for SFP connectors, so you'd have to have a cross-over cable or a connector that allows you to connect RX on one cable to TX on the other).

    Massimo : You've got quite confused ideas, haven't you?
    Vatine : Not necessarily. Off-hand, I don't recall if ports 25-28 require GBICs or not (if they're GBIC ports, they'll just be slots in the chassis). I guess I could check it, but... Missing cables happen, a lot, especially when different people deal with the physical install and configuration. Fibres being the wrong way around is painfully common.
    From Vatine

What system of administrative e-mail addresses does your organization use?

I'm getting ready to request a new batch of administrative e-mail addresses to replace an outdated hierarchy within my organization. I have the opportunity of choosing new aliases for 24/7 alert recipients, monitoring recipients, all team members, etc.

What does your org use for these purposes?

Groups in my org use things like: org-dept@, org-dept-all@, org-dept-alert@, org-dept-monitoring@, org-dept-status@.

TIA!!!!!111

  • It depends on the conditions; I worked at a large multinational company and we had to go a stage further than that, because we had org-dept-project@location.company.net, and that had to go to a subset of org-dept-project@company.net, and all mail had to go the other direction.

    (Somehow it all worked great; we actually dropped the org names, and when we were talking about a specific project, it went to projectname@company.net, and if it had to go to a whole department, it was dept@company.net)

    To actually answer your question, for alerts we went for the syslog standard; our mail system supported *-emerg|alert|crit|err|warn|notice|info|debug@company.net

    (actually, dunno who added it in but it was great; full names were expanded, so that *-emergency would redirect to emerg, etc)

    Ali Nabavi : Thanks a lot, Andrew! That is very helpful. I never thought about using syslog's convention. Your "org versus project" issue is something I'm grappling with -- sounds very dramatic! -- now. I think I favor your solution for this also.
    Andrew Bolster : When I started at $COMPANY they had an almost reversed system of what you're talking about: project-dept-org-location@company.net, but then switched to a more 'free' convention of project names and departments @company.net. To my knowledge, once it was set up and the allocation of names standardised, it actually reduced the mailadmin's workload (I only spoke to him at the xmas party, so his speech and my hearing may have been impaired). Glad to help!

Setting permissions on user accounts

We would like to lock a couple of accounts to prevent even domain admins from resetting the password without already knowing the current password. From what I can see in the permission sets, this looks possible. Anything I've found on the subject recommends against altering default permissions, but doesn't go into detail why.

Assuming that domain admins retain the ability to reset passwords without knowing current passwords, is it reasonable to prevent password resets on the domain admin account and maybe a couple of others? If not, why not?

  • It should be possible with ADSI edit. (I know you know, but: Be careful with ADSI Edit).

    But the real problem here is that you have to be able to trust the "Domain Admins". If you can't trust them with resetting passwords on important accounts, how can you trust them not to format the DCs?

If you want to keep some accounts special, move them into an OU called "ImportantAccounts" and tell all your domain admins not to reset those passwords!

    You can however centralise security auditing and keep track of who reset which password if you want to apportion blame after the fact.

    Ron Porter : First of all, ADSI edit is off limits! Second, trust would seem to be at the heart of the issue. In most areas, I like to work with a 'trust but verify' model. That suggests that we should leave things as they are, but add proper auditing and reporting.
    Ron Porter : John Gardeniers' comment on my question addresses the technical reasons why this tactic cannot succeed, and this answer raises valid concerns over the organizational reasons for developing this strategy. If the two were merged into one answer, that would be ideal, but this is the only actual answer and the problem has been fully addressed to my satisfaction.
    Seanchán Torpéist : On a note about trust: We had summer students in here installing software on PCs and we made them normal "Domain Users" but put the "Interns" group into "Local Admins" of each desktop PC using a script. So they could mess with normal PCs but couldn't log into Servers or use AD.

HDD Carrier, like a soda carrier available at McDonalds?

We use external USB drives for backups, and they have to be stored offsite at the end of the week. Right now we have your standard external USB drive inside an enclosure. We were thinking about moving to a USB dock, and dock a bare HDD for backups, rather than having various sized and types of enclosures. If we were to do this, the drives need protection while being transported to/from the safety deposit box.

Is there any kind of hard drive carrier that would let us slide two drives into it, and it would provide protection while the drives are carried around by non-technical people? I'm afraid such a product doesn't exist, but perhaps someone knows of something?

  • Even with a "shock resistant" chassis and a carrier designed for abuse, you are going to introduce additional risk by regularly transporting hard disks. They are mechanical devices and regular transportation will increase risk of failure.

    For off-site storage, I would recommend seeking an alternative media to hard disks, such as tape. Alternatively, the data could be transferred via a network to the hard disks located at a different facility.

    To answer your question specifically, I would recommend searching around a vendor such as Newegg. Depending on your budget, there's a variety of disk chassis. Consumer grade is going to be substantially cheaper than more commercial solutions. Your protection would probably be best done with a padded case, which is the approach companies like Cintas often take, at least with tape transport. There are also protection products that could potentially help.

    Bart Silverstrim : There's a risk, but for most drives you should get enough use from them before failure that they pay for themselves. And most modern drives are capable of parking in a way that the risk from shock is minimal. The bigger threat comes from temperature changes (hot car?), theft, etc... things that any transportation of sensitive data on physical media can introduce. I'd pretty much figure on a limited lifetime for using external drives as backups in rotation, and too many people use tapes and don't bother to check that they're still "good" for saving data to until it's too late.
    Warner : I prefer disks but not for transporting. Good points, I don't disagree.
    Jason Taylor : We have four drives for backup, two are always in the safety deposit box, the other two are being used. Every friday we switch them out. I'm not too afraid of the drives dying from frequent transportation, as the chances of all four dying at once are rare. Plus we have the backups on network storage here in the building, so we would have to lose four drives, and all the original (RAID'ed) backups.
    From Warner
  • Yes it does- kind of. We use a plastic molded "thing" which the 3.5" Sata hard drive goes into, which gives it some protection. Will look for a link for you now.

    AliGibbs : http://www.ebuyer.com/product/164300 something like this (although this is for 2.5")
    From AliGibbs
  • Have you tried looking at the different Vaultz cases? I use one for a netbook with a bit of egg-carton foam to fill in gaps and keep things from shifting. Works great for protecting things, since it's a hard case that prevents bangs and light weather from affecting the contents, and if you cut some foam to fit your contents it should keep things from sliding or banging around.

  • If you're using raw hard drives, Pelican makes cases: http://www.casesbypelican.com/hdrives.htm

    If they're in enclosures, you could just use a little food cooler.

    Jason Taylor : These look great, but a bit pricey, and we only need two, not four. Thanks for the info though!
    bacteriophage : I only meant it as a general idea; I'm sure you can find something similar or make your own - it's only plastic and foam.
  • If you stop by a photography shop, they'll kit you out with a sturdy, steel brief case with a customizable foam insert that you can cut out to fit your drive(s). If it's good enough for cameras and lenses, I think it's good enough for HDDs.

    From gravyface
  • We use Turtle Cases from Perm-A-Store for our tapes and they also make cases for drives similar to this:

    Turtle HD Case

    From Doug Luxem

WSS_Content contains?

WSS 3.0 on Windows 2003

I have a content database, named simply "WSS_Content", that keeps growing.

This database is separate from all the other content databases that are linked to a web application, but it is located in the same directory. I count 5 content databases in this directory, but only 4 web applications (excluding Central Admin). The trouble is it keeps growing in size, and I need to know what it is and why it's growing. Is this a default database of some kind? Where and why would it grow?

I recently found, through Central Administration, that one of my sites has a content database name of

"WSS_Content_(random numbers and letters)"

whereas, the other content databases would have a name like

"WSS_Content_(WebApplicationName)"

What gives?

  • An easy way to see all your databases (and what web application they belong to) is to go to Central Admin -> Operations -> Perform a Backup.

    This will give you a tree view of your farm with all of the databases listed.

    Mike : There is nothing linked to WSS_Content though..hmmm...
    MattB : @Mike: then you have something wacky going on. (That is the technical term...) I would probably run a SQL Profiler trace to see where the connections to WSS_Content are coming from.
    Mike : WSS_Content was referencing a MOSS Web App (different server), that's where I got confused...
    From MattB
  • I believe that SharePoint just puts a string of random numbers and letters after the WSS_Content db if you don't provide a name upon creation. Our SharePoint site was originally set up by a contractor and the default content db has the same random string you are talking about. I was not involved in the setup so I'm not sure what was done but my new web applications have proper names.

    This technet blog covers the topic of sorting them out.

    Edit: After reading through some of the subsequent articles on the link I provided it turns out that the admin content database doesn't give you an option to name it so it would automatically just be given a GUID. This could very well be the database you are seeing.

    Mike : That's a good reference, but I just want to know what WSS_Content is, and how I can downsize it without SQL - for example, with an expiration policy.
    Dynamo : I'm assuming that your WSS_CONTENT database is the one that's created upon installation of SharePoint. That's the most likely one to not have been named. I've found the databases for SharePoint do tend to grow a lot and I haven't found much I can do to limit it. My one suggestion would be to ensure that you're taking log backups. If you're not the log files will grow to be very large. Regular log backups won't make your database any smaller but it will clear out the unused space in your log so it doesn't grow any more than it needs to.
    From Dynamo

TAR command to extract a single file from a .tar.gz

Does anyone have the command syntax for extracting a single file from a .tar.gz that also allows me to place the extracted file in a certain directory? I have Googled this and found too many variations, with a lot of forum threads stating the syntax doesn't work. Before I venture on, I prefer to know the command will work, because I do not want to risk overwriting files and directories already present on my server.

Thanks.

  • If you're risking overwriting files in production with uncertain results, you're doing something wrong. Test first.

    tar zxvf archive.tar.gz path/to/file/in/archive -C /destination/dir

    egorgry : +1. If this doesn't work, it's not a tar.gz file.
    Kyle Brandt : If what egorgry says turns out to be true (it is not a gzip file), see what `file foo.tar.gz` thinks it is... it could be a mislabeled bz2 file or something, in which case you would just replace the z argument to tar with j.
    Dr. DOT : Thanks Warner -- it worked. Although the -C /destination/dir part did not work I got what I needed. The file ended up in a series of directories off the directory where the .tar.gz was located so I did not overwrite anything. (BTW, going in I knew and verified I was working with a .tar.gz. That was not an issue. I was simply looking for trustworthy syntax and I always trust what I get from stackoverflow and serverfault experts!)
    From Warner
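For anyone landing here later: with GNU tar, -C is position-sensitive and only affects the arguments that follow it, which is why placing it after the member name (as in the answer above) leaves the file in the current directory instead. A self-contained sketch with throwaway paths:

```shell
# Build a small demo archive (all paths here are hypothetical).
mkdir -p /tmp/tardemo/src/docs
echo "sample contents" > /tmp/tardemo/src/docs/report.txt
tar -czf /tmp/tardemo/archive.tar.gz -C /tmp/tardemo/src docs/report.txt

# Extract ONE member into a chosen directory.
# -C must come BEFORE the member name, because tar applies -C
# only to the arguments that follow it.
mkdir -p /tmp/tardemo/dest
tar -xzf /tmp/tardemo/archive.tar.gz -C /tmp/tardemo/dest docs/report.txt
# The file is now at /tmp/tardemo/dest/docs/report.txt
```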

Is it possible to nest VPN connections?

Is it possible to create a VPN connection while using a site-to-site VPN?

Have you ever tried or done this?

What are possible problems, pitfalls, things of the impossible, etc. ?

Also see: http://serverfault.com/questions/126537/is-it-possible-to-use-microsofts-secure-connection-rules-ipsec-with-vpn

  • You're likely to start having issues related to MTU, but yes, it can work.

    Whether it will work depends on what exact combination you're hoping to achieve.

    From LapTop006

Associate file extension to application running in XP MODE

Hi

I'm running a legacy application in XP Mode under Win7 Professional. The application installs, runs and publishes fine (it appears in the Win7 Start menu by itself after install).

However, when I want to associate a file extension from Win7 so that it opens automatically in that application, it isn't on the list of available applications, and I found no way to add it there.

Anyone knows how it can be done?

I've read about associating file extensions with remote apps on TS2008, but there it's done by setting the associations in the MSI that is built on the server and used to install the app on the client. Here I have no such tools.

Help would be appreciated!

Vadim R.

  • Take a look at the ASSOC and FTYPE commands that you can run at the CMD.EXE prompt. Here is an example from HELP FTYPE:

    ASSOC .pl=PerlScript
    FTYPE PerlScript=perl.exe %1 %*
    

    I don't know if this will work with the XP mode in Windows 7, though.

    V. Romanov : Surprisingly, it works :) It didn't work at first and I was a little confused, but it turns out the stupid application I was trying to run has two executables. One runs the main UI (and it's the one published as the XP Mode app), and another .exe is used to run the report files, which is not published. As soon as I published the second .exe and associated it with the extensions via the commands you posted, it worked. Thanks!

SSH use only my password, Ignore my ssh key, don't prompt me for a passphrase

This is a question regarding the OpenSSH client on Linux, MacOSX and FreeBSD.

Normally, I log into systems using my SSH key.

Occasionally, I want my SSH client to ignore my SSH key and use a password instead. If I 'ssh hostname', my client prompts me for the Passphrase to my SSH key which is an annoyance. Instead, I want the client to simply ignore my SSH key, so that the server will ask me for my password instead.

I tried the following, but I am still prompted for the passphrase to my SSH key. After this, I am prompted for my password.

ssh -o PreferredAuthentications=password host.example.org

I want to do this on the client side, without any modification of the remote host.

  • Try ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no host.example.org

    In ssh v2, keyboard-interactive is the way to say "password". As well, the -o PubkeyAuthentication=no should mean to not even try ssh key auth.

    Stefan Lasiewski : And in fact 'ssh -o PreferredAuthentications=keyboard-interactive host' also works. I was thrown off by SSH_CONFIG(5), which still mentions the 'password' keyword. Thanks for the clarification.
    grawity : Correction: In SSH v2, **both** `password` and `keyboard-interactive` are valid, and they are different things. (`password` requires a password, and `keyboard-interactive` can technically be anything.)
    From Bill Weiss
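The same behavior can be made persistent with a host alias in ~/.ssh/config (the alias name below is made up):

```
Host example-nokey
    HostName host.example.org
    PubkeyAuthentication no
    PreferredAuthentications keyboard-interactive,password
```

Running `ssh example-nokey` then skips the key entirely and goes straight to a password prompt.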

Using Chinese Characters With Mod_Rewrite

I'm trying to create a rule using Chinese characters

#RewriteRule ^zh(.*) /中文版$1 [L,R=301]

creates a 500 error when I change the file to UTF-8

#RewriteRule ^zh(.*) /%E4%B8%AD%E6%96%87%E7%89%88$1 [L,R=301]

redirects to /%25E4%25B8%25AD%25E6%2596%2587%25E7%2589%2588 (basically replacing % with %25)

Anybody familiar with this problem?

  • There is a whole page dedicated to this issue including solutions:

    http://www.dracos.co.uk/code/apache-rewrite-problem/ (fyi-noi: Google "apache escape" -> 6th hit)

    Moak : Similar, but not the same problem - mine is about the Chinese characters. I don't really want all these percentage signs.
  • Using notepad I changed the encoding to "ANSI as UTF-8", rather than UTF-8. This made everything work as expected.

    From Moak

Does anyone know of a Nagios plugin that uses nmap and does port checking?

Hi to all.

I need to monitor open and closed ports on dozens of hosts. I've found a Nagios plugin that does what I need, but I would have to use this script through NRPE.

Some of the hosts run Linux and all have Perl installed. But some of them are Windows machines, and it's not convenient for me to install Perl on every one of them. That's why I can't use this plugin.

I hope there's a Nagios plugin that uses nmap, or something similar, so it could check ports on every host remotely, without installing plugins on the remote hosts - only on the server.

  • What do you mean by checking ports on hosts remotely? Do you just want to connect to the port to see if it is open? The check_tcp plugin will do that, if that's what you want to do.

    Not quite sure what you mean.

    Eedoh : Well yes, I want to check for open and closed ports, but I need info for all of them, and I need to get warnings when the state changes. And, before all other things, I have to be able to run checks without installing plugins on the remote hosts. check_tcp is not able to scan ALL ports on every host. At least I don't know a way to do it, except by creating a new command for every port, and that's too much - I'd rather make my own plugin :D
    Warner : What Imo suggests is absolutely the correct way to do it. You should be making a check for separate things, not writing a flaky check that will produce inconsistent results. check_tcp is the proper way to check if a socket is open or closed.
    bread555 : I have to disagree. From a system administration point of view, perhaps. From a security point of view, I often run port scanners and compare them against a baseline. I also don't quite know what a "flaky check" would be... Seems like a pretty simple check, really. Have it do an nmap scan for each host, write to temp, compare against baseline. Exit 0 if no changes, exit 2 if there are.
    Imo : Yeah, if you're worried that the machine has been compromised and a backdoor port has been opened, an open-port count/check would be useful. In fact, I recall writing a small Nagios plugin for that many years ago. The original poster is a bit confused... to check a port you don't need to install NRPE or Perl on remote machines. Nagios and check_tcp will check the TCP port status on as many remote machines and ports as you care to configure.
    Eedoh : The configuration of which ports to check is the problem. I need to monitor ALL ports on ALL hosts. With check_tcp I would have to write 65535x4 configuration lines per host, because I'd need to specify every port as a new command. That's something I don't want to do. However, I started writing my own plugin that uses nmap and takes a port range as a parameter. Because I'm in a hurry, I will do only the basic functionality I need for now, but when I finish my tasks in a few weeks, I hope to improve it and upload it to the Nagios plugin exchange. Maybe even put a link to it here...
    From Imo
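A minimal sketch of the baseline-comparison approach discussed in these comments (the file paths and canned port lists are made up; a real check would generate the current list with something like `nmap -p- --open` and report the diff):

```shell
# Compare a saved baseline of open ports against a current scan result.
# Returns Nagios-style codes: 0 = OK, 2 = CRITICAL.
check_ports() {
    baseline="$1"
    current="$2"
    if diff -q "$baseline" "$current" >/dev/null 2>&1; then
        echo "OK: open ports match baseline"
        return 0
    else
        echo "CRITICAL: open port list changed"
        return 2
    fi
}

# Demo with canned data instead of a live nmap run:
printf '22/open\n80/open\n' > /tmp/ports-baseline.txt
printf '22/open\n80/open\n8080/open\n' > /tmp/ports-current.txt
check_ports /tmp/ports-baseline.txt /tmp/ports-current.txt || echo "exit code: $?"
# prints "CRITICAL: open port list changed" then "exit code: 2"
```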
  • I really like Nagios - I've been using it for years, and I even do some Oracle database management with it. But what Nagios really is, is an availability monitoring tool. I think what you are asking for is better served by other software such as OpenVAS or Snort.

    Eedoh : Yes, I had already suggested Snort to my chiefs, but they did not agree for some reason. In the meantime, however, I wrote my own plugin for monitoring changes on a desired range of ports, using nmap. I'm thinking of uploading it to the Nagios exchange, but it's still rough and needs some polishing... Maybe I'll upload it now and update it with a new version once it's totally finished (once I have free time :D).
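The kind of plugin described above can be sketched roughly like this. This is a hypothetical illustration, not Eedoh's actual plugin or an official Nagios one; the host, port range, and baseline path are all assumptions, and it follows the usual Nagios convention of exit 0 for OK and exit 2 for CRITICAL.

```shell
#!/bin/sh
# Hypothetical sketch of an nmap-based "port change" check (not an official
# Nagios plugin). On the first run it records a baseline of open ports for
# the host; on later runs it compares the current scan against that baseline.

scan_ports() {
    # $1 = host, $2 = port range. Print open TCP ports as "NNN/tcp", one per
    # line (assumes nmap is installed on the Nagios server).
    nmap -p "$2" --open "$1" | awk '/^[0-9]+\/tcp[ \t]+open/ {print $1}'
}

compare_ports() {
    # $1 = baseline file, $2 = file holding the current scan result.
    if diff -q "$1" "$2" >/dev/null 2>&1; then
        echo "OK: no port changes"
        return 0
    else
        echo "CRITICAL: open ports differ from baseline"
        return 2
    fi
}

main() {
    host=$1
    range=${2:-1-65535}
    baseline="/var/tmp/ports-$host.baseline"   # illustrative location
    current=$(mktemp)
    scan_ports "$host" "$range" > "$current"
    if [ ! -f "$baseline" ]; then
        mv "$current" "$baseline"
        echo "OK: baseline created for $host"
        return 0
    fi
    compare_ports "$baseline" "$current"
}

# Only scan when a host argument is given, so the functions above can also be
# reused on their own.
if [ $# -gt 0 ]; then main "$@"; fi
```

Invoked as e.g. `./check_port_changes 192.168.1.4 1-65535` from a Nagios command definition, one such check covers the whole port range that would otherwise need a check_tcp line per port.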

AVG 9 (Internet Security Business Edition) and IIS 6.0

Are any of you using AVG 9 Internet Security Business Edition along with IIS 6.0 and if so have you experienced problems?

We just went from 8.5 to 9.0... Luckily I tried it on only one of the servers in the web farm first, to run for a week and make sure it played well with my servers. A few hours after the install, all web apps were giving a "connection refused" error. Neither iisreset nor restarting the World Wide Web Publishing Service resolves the issue; only rebooting the machine brings the webs back up. They are all ASP.NET sites, by the way (v2.5). What's interesting is that if I take the machine out of the load balancer, it runs fine and the webs are fine for days... but as soon as I put it back in the pool, it's only a few hours before it's sad again.

The only thing I can think of right now is that the Resident Shield may be causing an issue, any thoughts?

  • Have you tried temporarily disabling the resident shield to find out if that's the problem?

    Did you install the AVG firewall along with AVG9?

    Are there any logs that AVG generates that you can look through?

    Dave Holland : I should have added more specifics to the original post... no, there is no firewall installed. I know why you'd ask that, but I'm smart enough not to miss that one ;). I want to be able to monitor during the day (normal business hours), so I plan on turning off Resident Shield tomorrow morning and trying it - however, I really don't want to run without it. I'm wondering if I need to exclude the IIS-related directories and my application directories from its scans.
    Dave Holland : Oh, also, yes there are logs; however, there's nothing in them that leads me to believe anything is "out of the normal".
    Dave : Do you notice any kind of unusual activity (cpu/mem) with the real time scanner process? AVG doesn't sell a product specifically for servers and I have had more than a few problems running on Win2k3 (perfectly fine on XP/Vista/7 though). I'm wondering if it sees your IIS activity as virus-like and blocks it. You could also try using a tool like Process Explorer to see what files the AVG executables are looking at.
    Dave Holland : The CPU and memory were all low; as soon as the machine's IIS went "down", activity on the box dropped to pretty much zero. I can see where it could think it's virus-like activity, but it sure seems like it should be TELLING me that, know what I mean?
    Dave : Have you tried excluding IIS directories from Resident Shield and/or totally disabling it? Also, what about setting IIS's max connection count very low? I wonder if it's the number of simultaneous connections that scares AVG.
    From Dave
  • The only thing I can figure is that something went wrong during the install of AVG. After removing it entirely and re-installing it, everything worked fine.

Windows IIS test server setup

Hello everyone,

I picked up a new server to do some testing and need a little help setting up my environment at home.

Here is what I would like to do: the test server will be used to test new code and configurations for a SaaS product. From my laptop, I would like to enter www.acme.com and have it hit the test server. The server is connected to a wireless router.

I have Windows Server 2008 with IIS running on an IP of 192.168.1.4.

What is the best way to set this up? I want to hit the test server for www.acme.com and not go out to the internet.

Do I need to mess with the LMHosts file?

Thanks for the help. I'm sure it's easy, but I have never done this before.

  • Open the hosts file on the Client PCs in C:\WINDOWS\system32\drivers\etc with Notepad and below the line:

    127.0.0.1 localhost

    add another line with:

    192.168.1.4 www.acme.com

    Your browser should check this file before it does a DNS check and use the given IP address.

    Note: This will redirect all traffic for www.acme.com to that server, so if you need access to the actual website you can add a # before the line in hosts to switch it off.

    chopps : This will need to be done on every machine accessing the server, correct? Do I need to do anything on the server if I want to access the site from it?
    Grizly : Upped, but make sure you are doing this on YOUR machine, not your server (do the same on the server if you want to access it from there too).
    Seanchán Torpéist : Absolutely on the "client" machines. Server only if you want access like Grizly says. If you have a few test clients (5+ maybe?), then a DNS solution might be a better fix for the long term. Easier to change IP and so on
    chopps : So I can browse to 192.168.0.4 and the default page comes up, but when I browse by name it still goes out to the net. I have "192.168.0.4 acme.com" in the hosts file. Ideas?
    Seanchán Torpéist : If you are browsing to www.acme.com, that name also needs to be in the **hosts** file. If you go to the command line and **ping acme.com**, the reply should come from 192.168.0.4. If not, your PC is not checking the hosts file, or the entry is not formatted correctly.
    Seanchán Torpéist : Try adding **127.0.0.1 crazyname** to a new line in the host file and then ping **crazyname**. The replies should be from **127.0.0.1**.
  • Pretty easy to do - set your laptop to use your W2K8 server as its DNS server. Enable DNS on the server, create a zone for acme.com, and add a host (A) record pointing www at 192.168.1.4.

    Your server will then intercept www.acme.com DNS requests and serve up its own IP address.

    LMHosts is (or rather was) used for WINS name resolution; it's not relevant in this scenario.

Looking for 'WinHlp32.exe compatible' replacement for free redistribution under vista and windows 7

Our software installs a package of legacy software for the client, and some of it has old .hlp files from a 3rd-party vendor that require WinHlp32.exe (note: we have no legal right to modify the .hlp files). These clients may only have CD/DVD and might not have internet access, etc. So I need a free 'WinHlp32.exe compatible' replacement that we can redistribute under Vista and Windows 7.

Background of the problem:
- Microsoft stopped including the 32-bit Help file viewer in Windows releases beginning with Windows Vista and Windows Server 2008.
- Starting with the release of Windows Vista and Windows Server 2008, third-party software developers are no longer authorized to redistribute WinHlp32.exe with their programs.
http://support.microsoft.com/kb/917607

  • Can you convert the WinHelp files to CHM? There are a few tools out there to do that; I believe MS even provides one. The existing HLP file would not be modified, but you'd have a Vista/Win7-compatible copy in a current/maintainable format.

    Here's a Yahoo! group that (surprisingly) still looks busy and focuses on WinHelp tips/techniques/discussion: http://groups.yahoo.com/group/HATT/

    Also, any chance that these .hlp files are 16-bit? Apparently (reading your link) Win7/Vista still ship with winhelp.exe.

    richardboon : Nope (sorry I was not clear): we have no rights to extract and/or recreate the .hlp in a new format. Moreover, the legacy program itself looks for the .hlp file to launch as its help.
    gravyface : You might have to write one yourself; there's a couple of utilities and libs on Freshmeat.net that can extract the contents of the .hlp files. They're GPL, so you could re-use a good chunk of them: http://freshmeat.net/search?q=hlp&submit=Search
    From gravyface

Cloning an LVM disk with dd?

I would like to clone a smaller LVM-formatted disk onto a larger one using dd, and boot that disk in the same machine. Do I need to make any special considerations for LVM?

Thanks! Although I considered the cool on-the-fly migration (add the second drive to the LVM volume, then tell LVM to remove the original drive from it), I decided my system would be much more likely to boot (and be fully backed up, on the original disk) if I simply: cloned the disk with dd, moved the new drive over to the first channel, removed the old drive, booted, added a new partition in the free space, added that partition to the original (smaller) LVM volume, and used resize2fs to make the new space available to the filesystem. This worked great.
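The steps above can be sketched as the following command sequence. The device names (/dev/sda cloned onto /dev/sdb), the new partition /dev/sdb3, and the volume group/logical volume names "vg0"/"root" are all assumptions to be substituted with your own. Because every one of these commands is destructive, the run() wrapper only prints them here.

```shell
# Print-only wrapper: this sketch shows the sequence without touching any
# disks. Replace run() with direct execution once the names are correct.
run() { echo "$@"; }

run dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync  # block-for-block clone
run fdisk /dev/sdb                        # add a partition (e.g. sdb3) in the new free space
run pvcreate /dev/sdb3                    # initialise it as an LVM physical volume
run vgextend vg0 /dev/sdb3                # add it to the existing volume group
run lvextend -l +100%FREE /dev/vg0/root   # grow the logical volume into the free space
run resize2fs /dev/vg0/root               # grow the ext filesystem to fill the LV
```

Note that after swapping the new drive onto the first channel and rebooting, the disk will come up under a different device name, so run the partition/LVM steps against whatever name it actually has at that point.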

  • If it's a block-for-block copy, no. You'd later extend the volumes using the unused space.

    joeforker : It won't confuse LVM even if I leave the original disk in the system?
    Warner : No but you'd have to be considerate of your BIOS boot order if things don't perform as expected.
    Warner : Additionally, trying to boot the secondary disk 'sdb' using the installation configured for the primary 'sda' would not work after the copy. For the initial dd it should be fine; otherwise, the boot disk would have to be on the same channel as before to operate as expected without changes.
    From Warner
  • Having the same volume group name will confuse LVM. Make sure you change the original's name with vgrename if you keep the original disks in the system.