Thursday, January 20, 2011

How can I start Fedora Directory Service with SELinux enabled?

I just did a fresh base install of Fedora 12 and did a yum install of 389-ds. I went through the included setup script (setup-ds-admin.pl), and everything started fine and was working normally. I could access the directory server and log in using the Directory Manager account created during the setup.

After a reboot I tried starting the dirsrv service using the following command:

[root@test-ds ~]# /etc/init.d/dirsrv-admin start
Starting dirsrv-admin:                                  [  OK  ]
[root@test-ds ~]# /etc/init.d/dirsrv start
Starting dirsrv: 
test-ds...
[26/Feb/2010:14:59:11 -0500] dse - The entry cn=config in file
/etc/dirsrv/slapd-test-ds/dse.ldif is invalid, error code 53
(DSA is unwilling to perform) - nsslapd-errorlog-mode: Failed to chmod
error log file to 600: errno 1 (Operation not permitted)

[26/Feb/2010:14:59:11 -0500] dse - Could not load config file [dse.ldif]
[26/Feb/2010:14:59:11 -0500] dse - Please edit the file to correct the
reported problems and then restart the server.
                                                           [FAILED]
  *** Warning: 1 instance(s) failed to start

If I turn off SELinux with "setenforce 0", it starts without any issue. There are no entries generated in /var/log/audit/audit.log like I'd normally see for an SELinux error, but the failure is reliably reproducible by toggling SELinux on and off with setenforce.

  • This is a known bug.

    Try updating your SELinux policy to at least 3.6.32-59 (see the sketch below for one way to check and update it).

    TrueDuality : Thanks, that was it! I tried searching for known bugs but didn't stumble across this one because I assumed it would still be marked as 'open'. Oh well, thanks again!
    From Studer
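
As a rough illustration of Studer's fix on Fedora 12 (the sketch below assumes the updated selinux-policy build is already available in your repositories; 3.6.32-59 is the minimum named in the answer):

    # check which selinux-policy build is currently installed
    rpm -q selinux-policy selinux-policy-targeted

    # pull in the newer policy, re-enable enforcing mode and start the instance
    yum update selinux-policy selinux-policy-targeted
    setenforce 1
    /etc/init.d/dirsrv start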

How to convert a really big HTML file to PDF in Windows

We have a few really large HTML files (60-100 MB) that we cannot convert to PDF with any reliability.

Adobe Acrobat 9 crashes - hits the 2GB limit for applications.

Open Office converts, but removes some of the anchors.

ActivePDF webgrabber crashes.

Is using a 64-bit setup an option for this type of thing?

I see a bunch of options out there, but can they do better than Adobe Acrobat 9 itself?

  • You could try Foxit's PDF creator. It's only $30 and they have a trial, so you can see if it will do the job. Their reader is way better than Adobe's in my opinion, so I would imagine that their writer is based on the same engine.

    Other free options that you could try are PDFCreator or PDFill.

    PeterStrange : Thanks, I tried it and after 20 hours it did the job from IE. However, no links or named destinations were created so that is a no go.
  • Depending on the use case, and assuming you can even display the HTML in a viewer, you might think about PrimoPDF. "Print-to-PDF" technologies might not necessarily be ideal, but they could lessen the size burden.

    From Mikey B
  • http://sourceforge.net/projects/pdfcreator/ for free

    From raerek
  • Know anyone with a Mac? If so, get them to open it with Preview and print to a PDF.

    From Chopper3
  • Why are the HTML files so large - are they files you obtain from a third party, or are they generated by something inside your organisation? Could you write a script to split the HTML files up into sections? Do they have links to images, and could your script reduce the quality of the images to shrink the file size?

    PeterStrange : It's just a large document. I could split it up but then the interlinking would be a problem. No images to speak of. :(
    David Hicks : Write a script to convert the HTML to PDF yourself, using something like Python and ReportLab?
  • I've printed pretty lengthy web pages to PDF using PDF995. Certainly not as large as you're talking about, but it worked very smoothly for me.

    PeterStrange : Thanks, I'm trying this one next
    From hometoast
  • The only way I could solve this problem was to convert pieces of HTML to PDF, then merge them in Adobe Acrobat 9.

    Thanks for your suggestions. Really interesting learning about all these neat tools that are out there.

  • I know this might be slightly heretical... but could you copy and paste it into Word, let Word handle the HTML document, and then use Word again to save the document as a PDF? Just an out-of-the-box idea.

    PeterStrange : No go. Too big for Word, apparently!
    From lilott8
  • If you are having this problem, try using HTMLTidy to clean up the HTML. That got the size down by a lot and made things easier to work with!

  • I wonder if you could use Winnovative's HTML to PDF converter?

    From JMan
  • Have you tried wkhtmltopdf? It's a command-line utility that is super easy to use:

    Install wkhtmltopdf
    Go to Start -> Run -> cmd

    cd %Program Files%\wkhtmltopdf                     [press enter]
    wkhtmltopdf.exe http://www.google.com google.pdf   [press enter]
    

    Voila. google.com saved to google.pdf.

    If I remember correctly, it does fairly well with its PDF compression.
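
Tying together the split-and-merge approach that ultimately worked with the wkhtmltopdf suggestion above, one hedged way to script it from the command line is shown below. The chunk file names are hypothetical, and pdftk is merely one free merge tool; as PeterStrange noted, links across the split boundaries will still be lost.

    REM convert each pre-split chunk of the big document (names are examples)
    wkhtmltopdf part01.html part01.pdf
    wkhtmltopdf part02.html part02.pdf
    wkhtmltopdf part03.html part03.pdf

    REM stitch the pieces back into one PDF with pdftk instead of Acrobat
    pdftk part01.pdf part02.pdf part03.pdf cat output combined.pdf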

Do CNAME DNS entries affect SEO?

I am setting up a website for a client which I will be hosting and maintaining, and I am trying to determine what will be the most pain free setup. My thought is to do the following:

1) Set up the client's site at client.mydomain.com
2) Have the client update their DNS records so that *.client.com is a CNAME for client.mydomain.com

From a practical perspective, this feels best for me because then I don't have to maintain access to their DNS control panel especially if they want to do other things with it. Furthermore, if I decide I need to move the website to a different host, I can do it seamlessly without their even knowing just by updating where client.mydomain.com points.

This being said, my concern is that I will affect the SEO for client.com. If I impact SEO at all, I will just go with the A-Record route and call it good...but if there is no impact, this seems nicer to me.

Thoughts?

(As an aside, I do have Google Maps included in this app, and the API key is yelling at me when I hit the app from a different domain...client.mydomain.com and client.com. Is there a way to ask for a single Google API key that can work with CNAME'd domains? I can work this out in code, but didn't know if there was a simpler solution.)

  • Hi there, welcome to this site! :-)

    Using a CNAME does not impact search engine ranking. It is common practice to use a CNAME. A classic way of setting up DNS is to have (servernames).company.com and then create (servicenames).company.com as CNAMEs pointing to the server names (where service names are, for example, "www").

    If your DNS setup is particularly convoluted and it takes a long time to resolve your DNS hostnames, then you could maybe rank a little lower with services like Google AdWords. AdWords takes the site 'quality' into account, and speed is a part of this. Generally DNS lookups are quick, so I think this is highly unlikely to be an issue.
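
For illustration only, the setup described in the question would look roughly like the BIND-style zone fragment below. The names and the 192.0.2.10 address are placeholders; note that the bare apex of client.com cannot itself be a CNAME, so it still needs an A record.

    ; fragment of the client.com zone (managed by the client)
    www.client.com.   IN  CNAME  client.mydomain.com.
    *.client.com.     IN  CNAME  client.mydomain.com.
    ; the zone apex must stay an A record, since a CNAME is not allowed there
    client.com.       IN  A      192.0.2.10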

VMWare-Mount not recognizing virtual disks

I have two disks as .vmdk files, and four as .vdi files. I can boot virtual machines on them with Sun xVM VirtualBox, and they work just fine. However, I want to mount them on my local computer so I can read some files off of them without starting a virtual machine. I downloaded the vmware-mount utility, but I get this error, even when mounting the .vmdk files, which should be VMware images...

Unable to mount the virtual disk. The disk may be in use by a virtual
machine, may not have enough volumes or mounted under another drive
letter. If not, verify that the file is a valid virtual disk file.

Thinking it's a problem with the utility, I downloaded the SDK and made my own simple program in C to try to mount a disk. It just initializes the API, connects to it, then attempts to open the disk. I get this error, once again claiming it is not a virtual disk:

**LOG: DISKLIB-DSCPTR: descriptor above max size: I64u
**LOG: DISKLIB-LINK  : "f:\programming\VMs\windowstrash.vdi" : failed to open (The file specified is not a virtual disk).
**LOG: DISKLIB-CHAIN : "f:\programming\VMs\windowstrash.vdi" : failed to open (The file specified is not a virtual disk).
**LOG: DISKLIB-LIB   : Failed to open 'f:\programming\VMs\windowstrash.vdi' with flags 0x1e (The file specified is not a virtual disk).
** FAILURE ** : The file specified is not a virtual disk

The files are clearly virtual disks, though, since I can actually mount and use them with a virtual machine. I tried detaching them from any VMs and trying again, but I got the same results.

Any ideas? Maybe the "descriptor above max size" is a hint?

Some more info: the .vmdk disks were created on other computers. I just copied them to mine and created new VMs around them, but they work fine. All the .vdi files were created on my machine. Not sure if that affects anything.

Update: WinMount can mount the file, so the problem seems to be with vmware-mount.

  • Umm I don't think VirtualBox disk images (.vdi) can be mounted under a VMware utility, unless I'm missing something.

    The .vmdk files are VMware disk files, which as of v2.1 VirtualBox can use (but it can make breaking changes to them that prevent them from working in VMware again).

    Claudiu : Ah, I didn't realize they were two different pieces of software. In any case, vmware-mount fails even on the .vmdk file. Can you recommend command line utilities that will work on .vmdk and .vdi files? (It can be two separate ones.)
    Andy Shellam : Yeah, from what I read it looks like VirtualBox modifies the .vmdk file so it cannot be read again with VMware. Which OS is your host system running? If Windows, this thread http://forums.virtualbox.org/viewtopic.php?t=4748 might prove useful.
    Claudiu : ah got it, that makes sense. thanks!
  • If your host is Linux, you can try this: HOWTO: Mount any VBox-compatible disk image on the host

    From Sunny
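
Another route, not raised in the thread but consistent with the point that the formats differ, is to convert the image into a plain .vmdk before mounting. A sketch only, assuming the windowstrash.vdi file from the question and either qemu-img or the VirtualBox tools on the PATH; the drive letter is an example:

    REM convert the VirtualBox .vdi into a .vmdk that vmware-mount understands
    qemu-img convert -f vdi -O vmdk windowstrash.vdi windowstrash.vmdk

    REM or, using VirtualBox's own tooling
    VBoxManage clonehd windowstrash.vdi windowstrash.vmdk --format VMDK

    REM then mount the converted image
    vmware-mount X: windowstrash.vmdk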

What is a blade?

On Cisco FC switches we see it mentioned as something like fc1/6 or fc2/4, meaning FC switch blade 1 port 6 or FC switch blade 2 port 4.

I would like to know what exactly that "blade" means. I tried googling but did not get a clear answer...

Please help.

Thanks in advance.

  • "Blade" is just a colloqiual term to mean "card in a slot in a chassis". I'm not sure of the source of the term, but it's been in common use since I first worked with chassis-based Ethernet switches back in the late 90's.

    Typically, "blades" were cards with things like Ethernet, fibre channel, or other types of ports on them. "Blade" servers are server computers designed in a form factor to "slot" into a chassis much like network switch "blades".

    In the case of your Cisco fiber channel switch, the "blade #" is just referring to the slot number (which, typically, is silk-screened on the chassis beside the slot).

PowerShell: Copy all AD users of an OU to another OU

I'm trying to copy all users of OU "A" to the OU "B". My PowerShell shot at this is

$sourceEntry = [ADSI]"LDAP://OU=A,DC=demo,DC=com"
$targetEntry = [ADSI]"LDAP://OU=B,DC=demo,DC=com"

$searcher = New-Object DirectoryServices.DirectorySearcher($sourceEntry)
$searcher.Filter = "(objectClass=user)"
$results = $searcher.FindAll()

foreach($result in $results) {
    $user = $result.GetDirectoryEntry()
    $user.CopyTo($targetEntry)
}

My problem is that $user appears to lack the CopyTo method I try to call. As far as I understand PowerShell, $user is a .NET object of the type System.DirectoryServices.DirectoryEntry ... in Visual Studio I find the method CopyTo ... in PowerShell I find none of its methods, just properties.

I'm just starting with PowerShell, so please help!

  • You can't copy AD users.

    You can move them from one place to another, or you can create new users based on existing ones... but in the latter case, you have to supply new user names, passwords and a few other things; it's not as simple as a "copy & paste" operation.

    Users are security principals, they must be unique in a given domain; you can't have two "identical" user objects in different OUs.

    mfinni : Correct and Yup. In NDS (Novell Directory Services) you can have the same username in multiple OUs. Not in AD.
    Hinek : Ok, but the MoveTo method isn't accessible either ...
    From Massimo
  • According to this StackOverflow question, you should use the PSBase member of a DirectoryEntry object in order to access all of its methods. Try this:

    foreach($result in $results) { 
        $user = $result.GetDirectoryEntry() 
        $user.PSBase.MoveTo($targetEntry) 
    }
    
    Hinek : Ok, thanks, this works. Together with your first answer (You can't copy AD users) this is the answer I searched for.
    From Massimo

What servers currently ship with the AMD SR56xx chipset for Opteron CPUs?

Which servers currently ship with AMD SR56xx chipset mainboards? HP? Dell?

  • The only ones I know of that are currently shipping are HP's DL165 G7 and SL165z G7 (both using the SR5670).

    From Chopper3

Anti-spam software for CentOS 4.8 w/ cPanel

My company has a dedicated server running CentOS Enterprise 4.8 with cPanel. We host our own websites as well as about three dozen client websites. We recently migrated our clients over from a Rackspace email solution, and many of them are complaining that they are getting a lot more spam now. We do have SpamAssassin enabled and set to be very aggressive (a score threshold of 2 or 3), but spam still seems to be an issue.

Any recommendations for server-side spam filtering software? Ideally it would complement SpamAssassin, but if it is robust in and of itself, a SpamAssassin replacement would also be acceptable.

  • I recommend MailScanner.

    It's a significant improvement over SpamAssassin alone, and it can provide anti-virus protection as well. I've used it for 5+ years. Recently, we upgraded to a cluster of hardware-based anti-spam/anti-virus scanners.

    John Conde : Thank you for the recommendation. I will check it out.
    From Joe
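
Whichever front end you settle on, the aggressiveness mentioned in the question boils down to a SpamAssassin score threshold. As a hedged illustration only (the values and path are examples, not part of Joe's answer), the server-wide settings usually live in /etc/mail/spamassassin/local.cf:

    # /etc/mail/spamassassin/local.cf - example values only
    required_score    3.0    # lower = more aggressive (the "2 or 3" from the question)
    use_bayes         1      # enable Bayesian filtering
    bayes_auto_learn  1      # let very high/low scoring mail train the Bayes DB
    skip_rbl_checks   0      # keep DNS blocklist tests enabled
    report_safe       0      # tag headers instead of rewriting the message body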

Is it possible to configure grovel.exe to ignore specified directories?

Is it possible to disable grovelling of particular directories? If so, how?

My understanding of grovel is that it reduces file duplication by having one copy of the file and multiple links to it.

We have a problem where our software occasionally fails to open a file (Paradox DB). Using FileMon we can see that grovel.exe is accessing the files our program is trying to update, so the initial thought is that maybe that's causing our problem. If possible we would like to configure grovel so that it doesn't process our data directory.

Thanks

  • Sounds like this article (How to exclude directories from the Single Instance Store Groveler) is probably what you're looking for. I'm not sure that it's actually causing your problem but I would definitely be wary of doing SIS on a shared-file database such as the one you describe.

    WileCau : @Evan, I agree, we weren't aware the client was running SIS on the same box as our software. Usually we have a dedicated machine. Your suggestion did the trick, thanks.

Setting pythonpath for Trac running as fast-cgi

I have a Trac install set up using FastCGI on a server. I'm trying to install the Bitten extension so I can do continuous integration. However, the environment Trac runs in for some reason doesn't have ~/lib/pythonX.X/ in its Python path, only /usr/lib/pythonX.X/. This is problematic, since I can't install Bitten system-wide without admin privileges (which I don't have).

What I'd like to do is change Trac's PYTHONPATH so that it also searches ~/lib/pythonX.X/, but I can't figure out where to set or modify this. The Trac install obviously doesn't read my .bash_profile, so although I can set it there, that's no help.

Has anyone done this before, or can at least point me in the right direction?

  • Figured this out about ten minutes after posting the question (which was about two hours after starting to try to figure it out)

    When dealing with Trac as FastCGI, there is a file called index.fcgi in your Trac environment directory. This file allows you to set environment variables for that particular Trac install.

    I simply added

    export PYTHONPATH=${PYTHONPATH}:/home/username/lib/python2.4/site-packages
    

    as the second line of that file, and things magically started working!

    From Zxaos

How to Edit Domain Password Complexity?

Hi All, :)

My Domain Environment is

2 Domain Controller ( Main & Secondary )

DHCP

Mail Server

Internet Server & ISA Server

2 DNS Server Primary & Secondary

My problem is that I tried to remove password complexity on my two domain controllers, but I still receive an error message that the password doesn't meet the complexity requirements. I tried running gpupdate /force after I disabled password complexity and checked the other conditions.

Does anyone know why?

I use Windows Server 2003 standalone.

  • I'm assuming you did something like what's described here. If you created a new GPO at the root of the domain, rather than editing the "Default Domain Policy", be sure that the new GPO has a lower "link order" number than the "Default Domain Policy". GPOs with a lower "link order" are applied last, and thus have a "higher precedence". (The whole idea of "precedence" in GPOs frustrates me... It's SO much easier to just think about it like "This GPO is applied over the other GPOs, so the settings here end up overriding previously-applied GPOs...")

    (While I'm complaining: I wish Microsoft would make up their mind, in dialogs that "order" items by "precedence", about whether items lower in the list have a higher "precedence" than items higher in the list. Globally, every part of the OS, and arguably all of their products, should follow the same pattern.)
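
One way to confirm which GPO is actually winning for the password policy, using only the tooling built into Server 2003 (output layout varies by service pack), is the Resultant Set of Policy data from gpresult:

    REM refresh policy, then dump the applied GPOs and their precedence
    gpupdate /force
    gpresult /v > rsop.txt
    REM check "Applied Group Policy Objects" and the password-policy values in
    REM rsop.txt; rsop.msc shows the same data in a GUI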

How do I increase maximum attachment size in Exchange 2007 SP1?

I've been looking all over for a relatively simple answer to a fairly straightforward question: "how do I increase the maximum size of attachments that can be sent and/or received in Exchange 2007?". But I have yet to find a solution that works.

We have a pretty straightforward setup: Exchange 2007 SP1 running on a single server, with the OWA role delegated to a second server. We did a clean install of Exchange 2007 a year or two ago: we did not upgrade from a previous version. I forget if we installed RTM and then patched it to SP1, or if we installed with SP1 already baked in. I just thought I'd mention those items, in case they influence the answer.

So far, I've tried running the following Powershell commands on the main Exchange server and verified that they've taken effect:

Set-TransportConfig -MaxReceiveSize 40MB
Set-ReceiveConnector "RcvConnector" -MaxMessageSize 40MB
Set-MaxReceiveSize "MailboxName" -MaxReceiveSize 40MB

As of right now, though, the specified mailbox is still rejecting messages over 10MB.

You get brownie points if you can also tell me how to set the default mailbox attachment size limits, so that new accounts don't have the default MaxReceiveSize value of "unlimited" that they currently do.

Any advice or suggestions would be greatly appreciated. Tx in advance!

  • Your best bet is to go over this MSDN article (Managing Message Size Limits) that covers just about everything size & limit related in Exchange 2007.

    It sounds like the Organizational limits aren't getting set correctly. Check the Transport settings by going into the Exchange Management Console, Organization Configuration, Hub Transport, then the Global Settings tab. The properties on the Transport settings are what I would check to see if that's causing your limits.

    From Joe Doyle
  • I've confirmed that the limits in the Global Settings tab are correct.

    I thought this article stood the best chance of addressing my question. But despite verifying that each script took effect using the Get-TransportConfig, Get-ReceiveConnector, Get-SendConnector and Get-Mailbox cmdlets, it's still not working.

    FWIW, I'd be OK focusing on just increasing the max receive attachment size at first, if that simplifies things.
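
For reference, and assuming the 40MB target from the question, a quick pass in the Exchange Management Shell that checks every limit in the send/receive path might look like this. The send connector limit is one knob the commands in the question don't touch, so it is worth confirming as well; the connector and mailbox names are placeholders:

    # organization-wide limits
    Get-TransportConfig | Format-List MaxSendSize, MaxReceiveSize

    # per-connector limits, both directions
    Get-ReceiveConnector | Format-List Name, MaxMessageSize
    Get-SendConnector    | Format-List Name, MaxMessageSize
    Set-SendConnector "SendConnectorName" -MaxMessageSize 40MB

    # per-mailbox limits
    Get-Mailbox "MailboxName" | Format-List MaxSendSize, MaxReceiveSize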

SharePoint / SQL Server "out of memory" error

After months of preparation, we launched a new SharePoint intranet portal today. Immediately, some users began getting a "server out of memory" error when they tried to log in. The SharePoint server appeared to be fine, but the SQL Server was reporting 100% memory use. (It has 4 GB.)

We rebooted the server and have not had further memory problems, though memory usage is hovering around 60% or above. I'm not convinced that we have solved the problem; I suspect it may return Monday morning when the whole staff tries to log in again.

I'm not a database guy, and I'm stumped about how to troubleshoot this. Do we need more memory, or is there somewhere I should look to reduce memory usage?

  • Well, the first thing to note is that by default SQL Server will suck up all memory available on the server over time. You can change this by going into Management Studio, opening the server properties, selecting Memory, and modifying the "maximum server memory" option to some number smaller than your 4GB of memory. If the server doesn't have enough memory for your installation you will of course still have poor performance, but at least this will prevent SQL from eating it all up directly.

    aardvark : If I reduce the memory available to SQL Server, wouldn't it still report "out of memory" when it reached the limit? Or am I missing something?
    Brian Knight : Not so much - the limitation is a top amount for SQL Server. SQL will begin to flush buffer pages to disk in order to free up memory when this threshold is approached. Like Charles said, it will not be great performance, but it will save your server from crashing due to lack of memory.
    Charles : Well, and from your comments it sounds like sharepoint is out of memory not SQL, which would make sense if SQL is hogging it all. Limiting the amount SQL can use would leave memory available for OS, Sharepoint, etc.
    aardvark : I should add that Sharepoint and SQL Server each have their own dedicated server. The SQL Server machine is the one reporting 100% memory usage, but maybe that's normal.
    From Charles
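
The same cap Charles describes can also be applied from T-SQL. A minimal sketch, with a placeholder value of 3072 MB to leave roughly 1 GB for the OS and anything else on a 4 GB box:

    -- expose the advanced option, then cap SQL Server's memory use
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 3072;
    RECONFIGURE;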

sudo chown - prevent ../

Need to allow a user to chown files under a particular directory using sudo. This does the trick:

user1 ALL= NOPASSWD: /bin/chown -[RPfchv] user2\:user2 /opt/some/path/[a-zA-Z0-9]*

But, does not prevent the user from being sneaky and doing something like:

[user1@rhel ~] sudo /bin/chown -v user2:user2 /opt/some/path/../../../etc/shadow

Any way to protect from this?
Machine is running Linux (Red Hat)

  • sudo introduces inherent security risks, and it is generally ill-advised to give it to users who don't have high levels of trust.

    Why not simply limit it to a recursive chown of the parent directory?

    sudoers primarily uses globbing; according to the manpage, wildcards do not match /.

    As for a more advanced solution, a wrapper script should do the trick.

    [root@server wmoore]# egrep '^wmoore' /etc/sudoers
    wmoore ALL= NOPASSWD: /bin/chown -[RPfchv] wmoore\:wmoore /home/wmoore/[a-zA-Z0-9]*
    
    [wmoore@server ~]$ sudo -l
    User wmoore may run the following commands on this host:
        (root) NOPASSWD: /bin/chown -[RPfchv] wmoore:wmoore /home/wmoore/[a-zA-Z0-9]*
    
    [wmoore@server ~]$ sudo chown -R wmoore:wmoore /home/wmoore/../../tmp/test
    Sorry, user wmoore is not allowed to execute '/bin/chown -R wmoore:wmoore /home/wmoore/../../tmp/test' as root on server.
    

    Oh, right. sudo package:

    Name        : sudo                         Relocations: (not relocatable)
    Version     : 1.6.9p17                          Vendor: CentOS
    Release     : 3.el5_3.1                     Build Date: Tue 24 Mar 2009 07:55:42 PM EDT
    

    CentOS5.

    From Warner
  • Try making a 'chown replacement' script, which validates its input before doing the actual chown. Then add this script to the sudoers file instead of /bin/chown. You may then set up a 'chown' alias for users if that is needed.

    On the other hand, are you sure your users need to do chown with root privileges? Maybe sgid or suid bits on the directories will solve your problem?
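
As a rough sketch of the wrapper-script idea above (the user2 account and /opt/some/path base come from the question; the script name and everything else is illustrative), the key point is to canonicalise the argument before it gets anywhere near chown:

    #!/bin/bash
    # /usr/local/bin/chown-wrapper - hypothetical; list this in sudoers
    # instead of /bin/chown
    BASE=/opt/some/path

    # resolve symlinks and ".." so sneaky paths collapse to their real target
    target=$(readlink -f -- "$1") || exit 1

    case "$target" in
        "$BASE"/*) exec /bin/chown -R user2:user2 -- "$target" ;;
        *)         echo "refusing: $target is outside $BASE" >&2; exit 1 ;;
    esac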

OSX server 10.5 Permissions in POSIX, how to get to ACL?

I'm using an OS X 10.5 server and am having some permission issues using the current POSIX system. I want to switch to ACLs, but I can't for the life of me figure out how this is done.

Does anyone have any experience with this?

Thanks in advance.

  • I just figured this out. From the Workgroup Manager file sharing permission settings, you open up the groups and drag the group you want into the ACL section.

    Then save settings and propagate down the line.

    From kylex

Why are symlinks not enabled - Apache

I have mapped my Apache document root to /var/www/vhosts. If I put files/folders there, I can see them and browse to them. But if I put symlinks in /var/www/vhosts, I get 403 - no permissions.
I have the following directives for this folder:

<Directory "/var/www/vhosts">
   Options Indexes FollowSymLinks
   AllowOverride All
</Directory>
  • What are the symlinks to?

    What are the permissions on the data being symlinked?

    What user and group does Apache run as in your config?

    From Warner
  • Where are the symlinks to? What are the permissions/owner/group on these files? Do an ll on the directory above so we can see if that's the issue. When you are on the server, are you able to navigate these symlinks ok?

    DaveG : An `ll`? I'm presuming you have `ll` aliased to `ls -l`.
    AliGibbs : Sorry, yes, that's what I meant
    From AliGibbs
  • You have to make sure that apache has read permissions on the files you are trying to serve.

    From Kousha
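
Pulling together the checks the answers ask for, a quick pass from the server's shell might look like the following; the example symlink target and the apache user name are assumptions about your setup, not facts from the question:

    # where does the symlink actually point?
    ls -l /var/www/vhosts/example

    # show the permissions on every component of the target path
    # (a parent directory missing +x for the Apache user is a common cause of 403s)
    namei -m /home/someuser/example

    # can the user Apache runs as actually read the target?
    sudo -u apache ls -l /home/someuser/example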

Performance mitigations serving content from a UNC share via IIS 6

I have a quad-processor VMware instance running Windows 2003 with 1 Gb Ethernet. I'm comparing serving the exact same heavy .NET 2.0 content from the local hard drive versus serving it from a UNC drive.

If I use WCAT to load it down, I see about a 40% reduction in transactions/sec while serving from the UNC. Processor time barely moves from 45% and the NIC sits around 40% either way. I don't see any significant memory loading either way. Context Switches/Transaction, though, more than doubles when serving from the UNC. Pathlengths more than double as well, but I believe that's just an expression of the effect of context switching.

All told, it looks like the bottleneck is processor switching while waiting on content from the UNC share. Is my experience about the norm? Is there some mitigation I might try?

I twiddled HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds a little bit per http://technet.microsoft.com/en-us/library/dd296694(WS.10).aspx, but to no obvious effect. I kind of doubt my problem is lack of connections, but rather just the act of switching from thread to thread while waiting on data.

  • We found that VMware VMs degrade the network substantially, not in throughput but in latency, by something like 40%. Our specific case was Active Directory traffic. However, perhaps using physical hardware is not something you can do.

    : Some of our production use will be on VM, but not all. I'll probably have to dig up a physical box somewhere and test there. Thanks.
    grub : Which virtual Ethernet card do you use? Maybe you can gain some performance with another driver, for example the new vmxnet3 card which is available with VMware vSphere.
    From echobeach2
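
For completeness, the redirector tweak mentioned in the question, plus the matching server-side setting, can be scripted as below. The 2048 values are placeholders to experiment with rather than recommendations from the thread, and both machines need a reboot to pick the changes up:

    REM on the IIS box (SMB client side)
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" ^
        /v MaxCmds /t REG_DWORD /d 2048 /f

    REM on the file server holding the UNC share
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
        /v MaxMpxCt /t REG_DWORD /d 2048 /f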

How to install mod_wsgi 3.1 on Ubuntu 9.10

I have a Python 3 web app so mod_wsgi < 3.1 doesn't cut it for me. However, on my Ubuntu 9.10 installation there doesn't seem to be a package for mod_wsgi 3.1.

  1. Is there an alternative repository that has a package for mod_wsgi 3.1?
  2. There's a new Ubuntu release not so long from now, will it contain mod_wsgi 3.1?
  3. Some other distro ready with mod_wsgi 3.1 to recommend?
  4. Maybe my best bet is to compile it myself? From a quick google it looks like I only need the python and apache dev packages installed.

Thanks!

  • For question 2, it looks like Lucid (10.04) will have 2.8. Probably compiling it yourself should not be too difficult.

    Graham Dumpleton : Which is sad in itself given that even Ubuntu and Redhat Fedora have mod_wsgi 3.2 in their repositories now. Ubuntu just isn't keeping up these days if they only have mod_wsgi 2.8. :-(
    From fission
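
If you go the compile-it-yourself route from item 4, the build is a standard configure/make affair. A sketch for 9.10, with hedges: the exact -dev package names (prefork vs. threaded, python3.1-dev) depend on which Apache MPM and Python 3 build you have, and you still need to add a LoadModule line to the Apache config afterwards:

    # build prerequisites (pick the -dev package matching your Apache MPM)
    sudo apt-get install apache2-prefork-dev python3.1-dev

    # unpack the mod_wsgi 3.1 source tarball, then:
    cd mod_wsgi-3.1
    ./configure --with-python=/usr/bin/python3.1
    make
    sudo make install   # installs mod_wsgi.so into Apache's modules directory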

SVN access: svnserve works but Apache doesn't

I set up SVN on my Windows XP machine using the BitNami stack. I've gotten svnserve to work, but Apache access seems to not work:

C:\>svn list svn://localhost
test/

C:\>svn list http://localhost
svn: OPTIONS of 'http://localhost': 200 OK (http://localhost)

Can someone help me figure out what is wrong?

More info:

The repository is in R:\SVN\ :

C:\>dir r:svn
 Volume in drive R is eSATA RAID
 Volume Serial Number is 6CFC-640E

 Directory of R:\svn

02/25/2010  06:38 PM    <DIR>          .
02/25/2010  06:38 PM    <DIR>          ..
02/25/2010  06:38 PM    <DIR>          conf
02/25/2010  06:41 PM    <DIR>          db
02/25/2010  06:38 PM                 2 format
02/25/2010  06:38 PM    <DIR>          hooks
02/25/2010  06:38 PM    <DIR>          locks
02/25/2010  06:38 PM               234 README.txt
               2 File(s)            236 bytes
               6 Dir(s)  2,000,274,423,808 bytes free

My httpd.conf file looks like this (with comments removed):

ServerRoot "C:/Program Files/BitNami Subversion Stack/apache2"
Listen 80
LoadModule actions_module modules/mod_actions.so
LoadModule alias_module modules/mod_alias.so
LoadModule asis_module modules/mod_asis.so
LoadModule auth_basic_module modules/mod_auth_basic.so
#LoadModule auth_digest_module modules/mod_auth_digest.so
#LoadModule authn_alias_module modules/mod_authn_alias.so
#LoadModule authn_anon_module modules/mod_authn_anon.so
#LoadModule authn_dbd_module modules/mod_authn_dbd.so
#LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_file_module modules/mod_authn_file.so
#LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
#LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
#LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule cache_module modules/mod_cache.so
#LoadModule cern_meta_module modules/mod_cern_meta.so
LoadModule cgi_module modules/mod_cgi.so
#LoadModule charset_lite_module modules/mod_charset_lite.so
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
#LoadModule dav_lock_module modules/mod_dav_lock.so
#LoadModule dbd_module modules/mod_dbd.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule dir_module modules/mod_dir.so
#LoadModule disk_cache_module modules/mod_disk_cache.so
#LoadModule dumpio_module modules/mod_dumpio.so
LoadModule env_module modules/mod_env.so
#LoadModule expires_module modules/mod_expires.so
#LoadModule ext_filter_module modules/mod_ext_filter.so
#LoadModule file_cache_module modules/mod_file_cache.so
#LoadModule filter_module modules/mod_filter.so
LoadModule headers_module modules/mod_headers.so
#LoadModule ident_module modules/mod_ident.so
#LoadModule imagemap_module modules/mod_imagemap.so
LoadModule include_module modules/mod_include.so
#LoadModule info_module modules/mod_info.so
LoadModule isapi_module modules/mod_isapi.so
#LoadModule ldap_module modules/mod_ldap.so
#LoadModule logio_module modules/mod_logio.so
LoadModule log_config_module modules/mod_log_config.so
#LoadModule log_forensic_module modules/mod_log_forensic.so
#LoadModule mem_cache_module modules/mod_mem_cache.so
LoadModule mime_module modules/mod_mime.so
#LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule negotiation_module modules/mod_negotiation.so
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
#LoadModule proxy_connect_module modules/mod_proxy_connect.so
#LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
#LoadModule proxy_http_module modules/mod_proxy_http.so
#LoadModule rewrite_module modules/mod_rewrite.so
LoadModule setenvif_module modules/mod_setenvif.so
#LoadModule speling_module modules/mod_speling.so
#LoadModule ssl_module modules/mod_ssl.so
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
#LoadModule status_module modules/mod_status.so
#LoadModule substitute_module modules/mod_substitute.so
#LoadModule unique_id_module modules/mod_unique_id.so
#LoadModule userdir_module modules/mod_userdir.so
#LoadModule usertrack_module modules/mod_usertrack.so
#LoadModule version_module modules/mod_version.so
#LoadModule vhost_alias_module modules/mod_vhost_alias.so

<IfModule !mpm_netware_module>
<IfModule !mpm_winnt_module>
#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.  
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User daemon
Group daemon

</IfModule>
</IfModule>

ServerAdmin webmaster@example.com

ServerName localhost:80

DocumentRoot "C:/Program Files/BitNami Subversion Stack/apache2/htdocs"

<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory "C:/Program Files/BitNami Subversion Stack/apache2/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all

</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>

ErrorLog "logs/error.log"
LogLevel warn

<IfModule log_config_module>
    #
    # The following directives define some format nicknames for use with
    # a CustomLog directive (see below).
    #
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      # You need to enable mod_logio.c to use %I and %O
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

    CustomLog "logs/access.log" common

</IfModule>

<IfModule alias_module>

    ScriptAlias /cgi-bin/ "C:/Program Files/BitNami Subversion Stack/apache2/cgi-bin/"

</IfModule>

<IfModule cgid_module>
    #
    # ScriptSock: On threaded servers, designate the path to the UNIX
    # socket used to communicate with the CGI daemon of mod_cgid.
    #
    #Scriptsock logs/cgisock
</IfModule>

#
# "C:/Program Files/BitNami Subversion Stack/apache2/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "C:/Program Files/BitNami Subversion Stack/apache2/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

DefaultType text/plain

<IfModule mime_module>
    #
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    #
    TypesConfig conf/mime.types

    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

</IfModule>

<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
                Include "C:\Program Files\BitNami Subversion Stack/apache2/conf/ssi.conf"
<Location /subversion>
DAV svn
SVNPath "R:\SVN"
</Location> 
  • You're telling Apache that your Subversion repository should be accessed at the /subversion URL, not the root. So your URL should be http://localhost/subversion.

    rlbond : Thanks! Works perfectly.

Mercurial (hg) with Active Directory

Can I set up Mercurial to authenticate users against Active Directory? In my case, hg can run on Windows, Linux or FreeBSD, but I need to use AD users.

NOTE: if it's possible, please point me to a tutorial.

  • Well, I started with this tutorial.

    After I finished that, I made the following additional changes on the server (Windows 2008):

    • Configured IIS to use SSL;
    • Disabled anonymous authentication for the site;
    • Enabled Basic and Windows authentication for the site;
    • Configured NTFS permissions on the repository folder.

    Also need to add the following lines to your repository's .hg\hgrc file:

    [web]
    allow_push = *
    

    On the client-side I had to explicitly specify username and password.
    Richard Slater : +1 for the guide
    From Regent
  • I wrote a 4-part blog post a couple of months back that shows how to use Active Directory/IIS to host Mercurial's web server. It works a treat:

    http://www.endswithsaurus.com/2010/05/setting-up-and-configuring-mercurial-in.html

    It walks you through:

    • Set up of Mercurial within IIS
    • Configuring the ISAPI extensions for Python
    • ISAPI rewrite to hide ugly URLs
    • Configuration of security privileges using Active Directory
    • Customization of the web UI
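
On the client side, the explicit username and password that Regent mentions can be kept off the command line with an [auth] block in the user's hgrc. This is standard Mercurial configuration, but the group name, host and account below are placeholders:

    [auth]
    work.prefix   = https://hg.example.com/
    work.username = MYDOMAIN\myuser
    work.password = s3cret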

Mirroring a database in SQL Server 2008

When the principal database has fewer changes than the mirror database, i.e. the mirror_failover_lsn is greater than the principal server's mirror_end_of_log_lsn, the mirroring session gets suspended and can't be resumed. Why is that? And how can I now restore my database and re-establish the session?

  • What mirroring mode are you using?

    Are you saying that you have writeable access to both databases in the mirror? If so you may be affected by one of these bugs: 978947,978791

    What status is your principal/mirror in? Check this by looking at the sys.database_mirroring DMV on each server.

  • You could try one of these options:

    May work:
    - Pause/suspend mirroring
    - Back up the database and logs and copy the backups to the mirror
    - Restore the backups and logs with NORECOVERY
    - Try to resume mirroring

    Will work:
    - Remove mirroring
    - Restore the latest backup and log files to the mirror
    - Re-run the mirroring setup
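
As a T-SQL sketch of the "will work" route above (the database name, file paths and endpoint addresses are placeholders):

    -- on the principal: drop the broken mirroring session and take fresh backups
    ALTER DATABASE MyDb SET PARTNER OFF;
    BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak';
    BACKUP LOG      MyDb TO DISK = 'C:\Backups\MyDb.trn';

    -- copy the files across, then on the mirror:
    RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH NORECOVERY;
    RESTORE LOG      MyDb FROM DISK = 'C:\Backups\MyDb.trn' WITH NORECOVERY;

    -- re-establish the session: SET PARTNER on the mirror first, then the principal
    -- mirror:    ALTER DATABASE MyDb SET PARTNER = 'TCP://principal.example.com:5022';
    -- principal: ALTER DATABASE MyDb SET PARTNER = 'TCP://mirror.example.com:5022';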

Sync a folder on drive D with the server

I am using Windows Server 2003 and Windows 7 as the client.

I have a folder on drive D, and I want to sync that folder with the server.

How can I write a logon script for that?

Thanks

  • The robocopy command is bundled with Windows 7 and allows you sync a folder to another destination. Try adding robocopy to your login script like this:

    robocopy d:\path\to\source s:\path\to\destination /MIR /Z
    

    The /MIR option ensures that the destination folder always reflects the source folder.

Remotely sync Time Machine drives

I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data.

I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup).

I've thought of using the Unix command rsync. In fact, that's how I had been doing it, but I was missing the compactness that Time Machine was able to achieve.

Thanks.

  • Rsync should work fine for keeping the new Xserve-generated deltas current on your remote drive -- as long as you're only pulling from the Xserve and not trying to push changes you make on the remote drive back to the Xserve's Time Machine...

    If you want bi-directional file synchronization (something like Unison might provide that), I don't think that's going to play nice with Time Machine.
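
A sketch of the pull described above; the host name and volume paths are placeholders. The -H flag matters because Time Machine's compactness comes from hard-linking unchanged files between snapshots, and -E preserves resource forks and extended attributes with Apple's bundled rsync (stock rsync builds use -X and -A for that instead):

    # run from the machine the external drive is attached to, pulling from the Xserve
    rsync -aHE --delete \
        xserve.example.com:/Volumes/TimeMachineDisk/Backups.backupdb/ \
        /Volumes/ExternalTB/Backups.backupdb/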

Can't pop3 from exchange server after a reboot

Last night I shut down my Exchange 2003 virtual machine, added a new VHD (for backups), and booted it again. Now I can't POP3 email from it with Outlook 2007. In Outlook I get the error:

Task 'blake@MyDomain.com - Receiving' reported error (0x800CCC0F) : 'The connection to the server was interrupted. If this problem continues, contact your server administrator or Internet service provider (ISP).'

Does anybody know what is wrong? All I did was a reboot. I haven't formatted the added disk. There are no weird errors in the event log.

I can still send mail with Outlook over port 25.

I can send and receive mail with OWA.

I can POP3 the mail to my phone (it takes about 15 minutes after sending a message, but I do get it eventually).

EDIT:
The 'Microsoft Exchange POP3' Service says that it is started but if I stop it and try to start it again, it fails saying 'Could not start the Microsoft Exchange POP3 service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.'

I did some googling and someone on exchangefreaks.com said that if I use task manager to 'End Task' on inetinfo.exe, then I can start the POP3 service fine.

Does anyone know what causes this problem? I am fine for now since I did get the service started, but if it does this after every reboot...

  • I have the same problem - after a restart of the server the POP service is not available. What I do is start it manually in Exchange System Manager (Servers -> {Your server name} -> Protocols -> POP -> default): right-click and choose Start from the popup.

    Not sure how to make the virtual POP server start automatically, though.
    Not sure how to make the virtual pop server start automaticaly