Wednesday, January 19, 2011

openvpn and virtualbox

Hi guys, I have a Linux machine on which I occasionally run Windows XP in VirtualBox. Everything works fine except for OpenVPN in XP, which can't connect to the VPN server running on a remote machine. The VPN client works fine from Linux. From what I've read so far it seems to be a port-forwarding problem. I keep getting this error: TCP/UDP: Incoming packet rejected from 10.0.2.2:1194, expected peer address: (allow this incoming source address/port by removing --remote or adding --float), but I have no idea how to fix it.

  • I suspect that Kamil is right.

    Change your networking in VirtualBox to Bridged instead of NAT, and I think this will work a lot better.

  • Also note things like this in the VirtualBox 3.1.2 user manual, section 6.3.3 "NAT Limitations" on page 94:

    Protocols such as GRE are unsupported: Protocols other than TCP and UDP are not supported. This means some VPN products (e.g., PPTP from Microsoft) cannot be used. There are other VPN products which use simply TCP and UDP.

    In case it helps, one thing I've done in the past is to set up OpenVPN on the host. This way, all the guests using NAT have access to the VPN tunnel.

    Christopher Cashell : That shouldn't matter in this case. OpenVPN is pure TCP or UDP, at least as far as the host/transport is concerned.
    From Stéphane
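
    For completeness, the error text itself names a client-side workaround: adding float to the OpenVPN config on the XP guest tells it to accept packets from a source address other than the one given in remote (here the VirtualBox NAT gateway, 10.0.2.2). A minimal sketch of the relevant client config lines, with a placeholder server address; bridged networking, as suggested above, is still the cleaner fix:

    # client.ovpn on the XP guest (sketch; server name and port are placeholders)
    client
    dev tun
    proto udp
    remote vpn.example.com 1194
    # accept replies whose source address differs from the one in "remote"
    float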

How paranoid should I be with external IIS web servers accessing unauthenticated internal wcf services?

Environment:

  • web and app servers are Windows Server 2003 Enterprise / IIS 6
  • Web Server is behind a firewall - ports 80 and 443 are open to the world.
  • Between the WebServer and the AppServer traffic goes through a firewall and only port 80 is open
  • The webserver external website makes calls to WCF services on the app server. These WCF service calls are completely unauthenticated - but perform very critical data updates to a database server.

I assume (correct me if I'm wrong) that compromising the webserver would require an attack along port 443 or 80 from the outside world - thus it would require an IIS exploit to compromise the server.

Questions:

  1. How bad is this configuration when dealing with critical data?
  2. If the webserver is compromised, is there anything that can be done to mitigate its impact and for most scenarios prevent arbitrary invocation of the WCF services?
  3. Is there a list of the "typical" impacts of historic IIS vulnerabilities?
  • You are missing an important point: attacks don't only come from outside. If someone compromises the firewall, they can invoke the WCF service. A second scenario is someone impersonating the webserver, which would fool the firewall and let them invoke the WCF service. Since you said the data is critical, the risk outweighs the cost of adding authentication.

    Nathan : How would someone "impersonate the webserver"? Are you simply referring to IP address spoofing or something else?
    Peter Schuetze : I was not saying that someone impersonates the firewall, but that someone with insider knowledge and access to your internal network attacks the system. That way they bypass the firewall between the outside world and the webserver entirely. IP spoofing might be one scenario. The whole point of my answer was to shift the focus from 'all attacks come from outside' to 'attacks can come from outside and inside' (e.g. employees and contractors).
  • Ensuring that domain and server isolation is set up will secure the traffic between the 2 servers. As long as your developer is using the proper injection-prevention techniques, the only way I could see to invoke the WCF service would be through a remote code execution vulnerability. There aren't too many of those, and even an unpatched one would, I believe, run with the rights of your worker process identity (which, per best practice, should be locked down).

    I would highly recommend that you take a look at the WCF security guidelines from the patterns & practices group. It's pretty easy to implement security for WCF (simple message signing comes to mind; see the config sketch at the end of this answer) that would not require that the traffic be authenticated (domain and server isolation, however, implements that automatically without impact to the application).

    Nathan : Doesn't the worker process identity have access to invoke the WCF Services by necessity? In that case, from a remote code execution vulnerability standpoint is anything really gained from authenticating the WCF services?
    Jim B : No, the worker process identity only has the ability to send messages. Assuming the server is compromised somehow, the best an attacker can do is send an unsigned message, since they would have to know how to invoke the application's code to correctly send a signed one. Regardless of platform, if your app is compromised you are hosed (which is why the guide focuses on app security, since Windows server security is pretty easy to set up). To put this in perspective, every large data breach in the past 5 years could have been prevented by simply implementing domain and server isolation.
    From Jim B
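
    Purely as a rough illustration of the message-security idea mentioned above (not Jim's exact configuration - the binding name and the choice of certificate credentials are assumptions), a wsHttpBinding section in the service's web.config could look like this:

    <!-- sketch only: binding name and client credential type are assumptions -->
    <bindings>
      <wsHttpBinding>
        <binding name="signedInternalCalls">
          <security mode="Message">
            <message clientCredentialType="Certificate" />
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>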

Balancing between load and site stability (500 errors) with max-procs in lighttpd

I have a rather large dynamic site running off a single lighttpd powered box (1.3 million page views per day).

But I'm frequently getting random 500 errors on the site, which sometimes go away within a second or two, and sometimes don't go away until the lighttpd service is restarted.

If I set max-procs to something low, like 2-4, the server load stays relatively low, about 2-3 (at least for this hardware and level of traffic), yet I get VERY frequent 500 errors. If I raise it to 6-8, the server load doubles, but I get fewer of these hiccups.

I've currently settled at 6, which works out okay, but I still get quite a few of these intermittent 500 errors, with a non-recovering one every few days that requires a lighttpd restart.

What can I do?

The site is php/mysql powered.
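
For reference, max-procs lives in the FastCGI section of lighttpd.conf, and the total number of PHP workers is roughly max-procs times PHP_FCGI_CHILDREN; that product is the load-versus-500s trade-off described above. A minimal sketch, with assumed paths and values (not a recommendation):

    fastcgi.server = ( ".php" =>
      (( "bin-path"        => "/usr/bin/php-cgi",
         "socket"          => "/tmp/php-fastcgi.socket",
         "max-procs"       => 6,
         "bin-environment" => (
           "PHP_FCGI_CHILDREN"     => "4",
           "PHP_FCGI_MAX_REQUESTS" => "10000"
         )
      ))
    )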

How do I find out how an Active Directory group is being used?

Here's the scenario: I have a Windows 2003 Active Directory security group that was created in 2007, and no one remembers what it is for. Is it possible to find out what permissions in our AD implementation are assigned to this group? (Is it controlling folder permissions, is it used to delegate user creation, is it doing nothing?)

We're using Windows 2003 as the AD controllers. The AD controllers are also the root DFS servers, but we're using Openfiler to serve the actual SMB/CIFS shares.

The group isn't a member of anything, and the only people in the group are part of the IT staff. I tried accesschk from the Sysinternals toolset, but it's not being helpful. Are there any other tools I should look at?

  • If there are not too many people in that group, just deactivate the group and see what happens. It helps to tell everybody in that group what you are doing, so you can quickly investigate when something stops working. Don't do that on your last day before you go on vacation. ;)

    The assumption is that you don't use user accounts (from your staff) for automated processes.

    quinnr : Right. :) That's where I'm at. I just hoped there was a more elegant way than doing a scream test. Thanks for your answer!
  • ShareEnum and AccessEnum would be the tools I'd use to try looking for Share/NTFS permissions related to the group.

    http://technet.microsoft.com/en-us/sysinternals/bb897442.aspx

    http://technet.microsoft.com/en-us/sysinternals/bb897332.aspx

    You should also consider that it's entirely possible that users were added to a security group only for the purpose of placing that group in an OU that has a group policy applied.

    GPMC is the quickest way I know of to find out what a group policy actually does.

    http://www.microsoft.com/windowsserver2003/gpmc/default.mspx

    Sorry if I tend to think outside the box first, but you did say "no one remembers what it is for". It's entirely possible that no tool will show what the group is for, as the group may exist while the permissions or policy that were applied in the past have since been removed.

    quinnr : I know what you're talking about, and it's quite handy. I'm looking for something that will walk through AD and report privileges and permissions assigned to AD Objects. I'm surprised MS hasn't written a tool for this. Thanks for the answer.
    Brian Desmond : What does GPMC really have to do with this?
    pplrppl : What part of "no one remembers what it is for" did you not get? There are MANY possible uses for a group in Active Directory, and not all of them are permissions-based.
    From pplrppl

verisign certificate into jboss server SSL

Hi all, I'm trying to enable JBoss to use SSL with a previously generated certificate from Verisign. I imported both certificates, the server certificate and the CA certificate, into the keystore file, and I configured server.xml to use that keystore and enable SSL. Then when I run JBoss, I get this error: "certificate or key corresponds to the SSL cipher suites which are enabled"

Question: reading some posts on the internet, I found that every example starts by generating a Certificate Request. Is it strictly necessary to do that if I already have the server certificate, and does that CSR have to be imported into the keystore as well? At this point I'm very confused about this issue; I've tried almost every solution posted in several forums, but so far I haven't had any luck! Can you give me some tips to solve this problem?

thanks in advance

This is my keystore file:

Keystore type: jks
Keystore provider: SUN

Your keystore contains 2 entries

j2ee, Dec 29, 2009, trustedCertEntry,
Certificate fingerprint (MD5): 69:CC:2D:2A:2D:EF:C4:DB:A2:26:35:57:06:29:7D:4C
ugent, Dec 29, 2009, trustedCertEntry,
Certificate fingerprint (MD5): AC:D8:0E:A2:7B:B7:2C:E7:00:DC:22:72:4A:5F:1E:92

and my server.xml configuration:

  • When you generate the certificate request in the store, the public/private keypair is generated in the store. These keys are required for SSL encryption to work. It sounds to me like these were generated on another system and are not in your store.

    Usually you can transport certificates and keys around in a file format called PFX, but the Java keytool doesn't seem to do much with this, and as you suggest it would rather have you generate a new keypair/certificate and go from there.

    It does seem possible however to import a whole keystore into your new keystore so if you have the old machine/keystore this may be a possibility.

    http://download.java.net/jdk7/docs/technotes/tools/solaris/keytool.html
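
    Building on that keystore-import suggestion: if the key pair lives in a PKCS#12/PFX file exported from the machine that generated the CSR, keytool (Java 6 and later) can import the whole store. A sketch with placeholder file names and passwords:

    keytool -importkeystore \
        -srckeystore server.pfx -srcstoretype PKCS12 \
        -destkeystore keystore.jks -deststoretype JKS \
        -srcstorepass changeit -deststorepass changeit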

  • Finally I found a solution to this problem. As you said, I need to have my private key, server certificate and CA certificate in my keystore. This post explains how to import these 3 existing elements (as in my case) into the keystore using a very useful tool called KeyMan: http://www.jguru.com/faq/view.jsp?EID=532461

    cheers,

Looking for a virtualization guide, looking to virtualise SBS 2008 and Server 2008 machines

I would like to "play" around and find out more about Virtualisation, mainly for Microsoft servers and PC's.

Any good guides or blogs would be great

thanks!

Remote Desktop settings not being applied for user

We have a number of Win 2003 servers for which we have Remote Desktop enabled. Each user has their profile edited so that they can only connect for 2 hours maximum and have 30 minutes idle time, after which they are disconnected and the session is closed. On one server, however, the administrator account does not honour the maximum session limit. We can stay connected for days if we want. Originally this was how it was set up, and we later changed the profile for all users so that there are limits. We have rebooted the server a couple of times since, and the Management Console shows the limits. If we are idle for too long we are disconnected.
Other users are having all the limits observed.
Any suggestions?

  • Check the group policies if you have conflicting configurations. Also have a look at this: http://support.microsoft.com/?kbid=940122

  • You may find Resultant Set of Policy (RSoP) snap-in useful in resolving your issue. Essentially it allows you to check policies in effect for a given user or computer in Active Directory environment.
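
    A quick command-line alternative to the RSoP snap-in (assuming the stock gpresult tool on Windows Server 2003) is to dump the effective policy for the affected account, e.g.:

    gpresult /user administrator /v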

  • I am not sure if I am getting the question right, but here goes. You are saying that at one time this server was set up with no limits on terminal sessions, but now you have set the limits and all users that connect to this server have the limits applied to their sessions?

    And the problem is that the administrator account is not having the limits applied?

    If that is the case it could be Remote Desktop for Administration in Windows Server 2003.

    I could be way off base here, I am just not understanding what you are asking.

    http://support.microsoft.com/kb/814590

Printing to network printers from 32-bit and 64-bit clients

Here at work, we're starting to contemplate the implications of switching from Windows XP to Windows 7. We have thin clients using XP Embedded and Citrix MetaFrame 4.5 that print to network printers (Konica Minoltas and HPs), and desktop/laptop users on XP. We would also like to start moving the servers to Windows 2008 R2; currently we use Windows 2003 R2 and Windows 2008 SP1. The main issue is that some of the Windows 7 users would likely be on 64-bit for various reasons, so we would need two copies of the print driver, for both 32-bit and 64-bit clients. It seems the only way to install 32-bit drivers on Windows Server 2008 R2 is through some odd remote install process, and the name has to match the 64-bit version.

So far we've considered trying to use the HP Universal Print Driver for PCL 5, or manually installing 2 different drivers for every printer from clients that need it (nightmare)... Am I way off base with this issue? Am I missing a far more obvious solution?

  • Unless I misunderstand you, you should be able to add all the necessary printer drivers on the server without issue. Open the Printers folder on the server, right-click an empty area in the Explorer window (or use the "File" menu), then select "Run as Administrator" => "Server Properties."

    There will be a "Drivers" tab; from there you can add additional printer drivers - you will see it indeed shows the processor for each driver installed. In this way you can install x86, x64, etc drivers and when a client connects to the printer, the appropriate drivers will be installed automatically on the client. This feature is called "Point and Print."

    From Mark Sowul
  • I know that we were using an x64 Citrix server for a while, then moved it back to x86 because it was a hassle trying to find 64-bit drivers for all the printers we were using. That said, I don't believe the client operating system matters, because x86 clients connected to that server all the time and never had an issue printing through the x64 Citrix server.

    I believe it all depends on the server side: if you are using a mix of x86 and x64 servers then you will need both versions of the driver. By both versions I mean install x64 drivers on the x64 servers and x86 drivers on the x86 servers. But you do not need to install an x86 driver on an x64 server for Presentation Server.

Jungledisk for backing up my Wiki and Drupal sites?

Hi, would you guys suggest that I ask my webhost to install Jungle Disk so that my backups would be stored on Amazon S3 as well? Do you know of a better solution?

Thanks in advance!

  • So far I'm really liking jungle disk - I'm just now checking out their server version but I think we will be rolling it out here.

    1. I don't see how it could hurt to ask them to install it if that is the way you want to go.
    2. I don't know of any good competitors in the same feature / price range. Especially now that the Rackspace cloud does not have transfer fees and costs about the same for storage.

    If we decided to move forward with jungledisk in a timely manner I'll update this.

    From Chance
  • I'm a fan of CrashPlan... it's not S3, but it's somewhat easier to deal with.

    From
  • I am using JungleDisk on more than a dozen servers pushing changes to hosted files off-site nightly. We also do local backups for fast recovery if a single server fails. The thing with any Internet-based backup solution will be the time required to restore files. Some services will put your data on a disk and mail it to you, but JungleDisk is not (as of this writing) in that category. Use it for disaster recovery and a last-ditch way to get your files back, but don't rely on it as your primary backup as restore times will be long if you're pushing a lot of files around.

Can I setup a link SQL server connection between servers on different networks?

We have a production SQL server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to setup a SQL job that copies content from a table on the staging server to prod on a regular basis, and I think we need to setup a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers, I just need to know what network and security configurations need to happen from the hosting company's perspective so I can ask them to do it.

  • A linked server is not the best option.

    • it opens the SQL Server for remote T-SQL execution, a very serious security hole
    • it requires SQL password based authentication because of the different domains involved
    • it does not offer any redundancy when faced with spotty connectivity
    • TDS as a protocol is not designed for speed

    A much better alternative is to use Service Broker:

    • SSB operates on a dedicated port that does not allow arbitrary T-SQL commands, unlike a linked server
    • SSB supports certificate-based authentication across distinct domains
    • Message fragmentation and delivery fairness ensure smooth operation over bad/slow connections
    • SSB uses a high-throughput protocol designed for speed, the same protocol used in database mirroring.

    If you insist on the linked server then you must:

    • enable SQL Server to listen on a public internet address
    • enable TCP on the server and open the SQL listening port (default TCP 1433) on the firewall. If the server listens on non-default ports, then you must also start the SQL Browser service, open port 1434 UDP on the firewall, and allow sqlservr.exe to open arbitrary ports on the firewall.
    • you must enable SQL Authentication to allow for SQL-based passwords.
    • To protect the traffic you should ensure SSL is used; see How to enable SSL encryption for an instance of SQL Server by using Microsoft Management Console and How to Enable Channel Encryption.
    • Check, re-check and double-recheck that the [sa] login has a bulletproof password that is known only to people you trust absolutely. Your TDS port opened to the internet will be subject to a constant barrage of brute-force attacks on [sa] from a million automated bots.
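
    If you do accept those risks and go the linked-server route, the server object itself is created with the standard system procedures; a sketch with placeholder names, address and credentials:

    -- sketch only: server alias, address and the SQL login are placeholders
    EXEC sp_addlinkedserver
         @server     = N'PRODSQL',
         @srvproduct = N'',
         @provider   = N'SQLNCLI',
         @datasrc    = N'prod.example.com,1433';  -- the address/port opened on the firewall

    -- map local logins to a dedicated, least-privileged SQL login on production
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = N'PRODSQL',
         @useself     = N'FALSE',
         @locallogin  = NULL,
         @rmtuser     = N'staging_push',
         @rmtpassword = N'********';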

gateway-to-gateway timeout.

How do I make a backup of all settings on my Symantec Gateway 320? My company needs a new one because I have to reset this one all the time, since one of our tunnels is constantly timing out. The company has no support contract with Symantec, so they won't let me speak with a technician. Any ideas why I would have to reset the firewall multiple times on a daily basis in order to keep consistent connectivity on a gateway-to-gateway tunnel?

  • Heat issues are by far the most likely cause, followed by OS bugs.

    From LapTop006
  • Is the SGS 320 connected to a UPS that provides quality power? If not, that could be an issue. I have seen this numerous times with similar devices.

    As mentioned by LapTop006, heat can be an issue. I have seen the SGS stacked with a modem and a switch or some other combo, and the heat gets to one or several of the devices.

    My SGS 420 has been rock solid for about a year but it is not stacked and it has power from a UPS.

    What do the SGS logs show? Any additional clues?

    Is the connection made using an ADSL modem or other device? Same power and heat issues would apply

    Failing that:

    To back up a Symantec Gateway Security 300/400 series appliance configuration:

    1. Turn off the appliance, turn DIP switches 1 and 2 to the on (up) position, and turn on the appliance.
    2. Copy the symcftpw.exe utility from the CD-ROM to a folder on your hard drive.
    3. Double-click the symcftpw icon.
    4. In the Server IP text box, type the IP address of the appliance (the default is 192.168.0.1).
    5. In the Local File text box, type a file name for the backup file.
    6. Click Get.
    7. When the Get process finishes, turn off the appliance, turn DIP switches 1 and 2 to the off (down) position, and turn on the appliance.

    Symantec recommends that you store backup files on removable media and in a safe location.

    To restore a Symantec Gateway Security 300/400 series appliance configuration:

    1. Turn off the appliance, turn DIP switches 1 and 2 to the on (up) position, and turn on the appliance.
    2. Copy the symcftpw.exe utility from the CD to a folder on your hard drive.
    3. Double-click the symcftpw icon.
    4. In the Server IP text box, type the IP address of the appliance (the default is 192.168.0.1).
    5. In the Local File text box, type the file name of the backup file.
    6. Click Put.
    7. When the Put process finishes, turn off the appliance, turn DIP switches 1 and 2 to the off (down) position, and turn on the appliance.

    From Dave M

Django "Could not import settings 'settings.py'" error.

I've already done my best to follow the instructions at http://docs.djangoproject.com/en/dev/howto/deployment/modpython/, but a customer is transferring a website to us, and I suspect the original developer's methods were a bit, uh, different.

So, first the full error message:

ImportError: Could not import settings 'settings.py' (Is it on sys.path? Does it have syntax errors?): No module named py

Then, the apache configuration for the site:

<Location /acecoach/>
    SetHandler python-program
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE settings.py
    PythonOption django.root /acecoach
    PythonPath "['/home/acecoach/public_html/acecoach'] + sys.path"
    PythonDebug On
</Location>

Now, the "settings module" as far as I know, is located in /home/acecoach/public_html/acecoach/settings.py This file is readable by the apache server - I tested this by actually SU-ing to the apache user and reading the file from the command line.

I've also read similar advice on this error message, and found no useful help in this regard. It's driving me nuts. :)

  • Hey, I had the same problem with mod_python too, but when I migrated to Apache + mod_wsgi all of my problems were solved.
    Why don't you try mod_wsgi?
    It's newer than mod_python and doesn't have these problems.
    But if you want to solve it as-is, you can go to this address:
    http://stackoverflow.com/questions/1216340/django-newbie-deployment-question-importerror-could-not-import-settings-setti

    Graham Dumpleton : Yours was a different issue. This person was using Python file extension when they shouldn't have.
    From Ansari
  • Remove the .py file extension and add the project context to your settings module definition. Assuming that your project is called acecoach.

    SetEnv DJANGO_SETTINGS_MODULE acecoach.settings
    

    The Python documentation explains the reason better than I can:

    http://docs.python.org/tutorial/modules.html#modules

    A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended.

    http://docs.python.org/tutorial/modules.html#packages

    Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A.

    Graham Dumpleton : Because they only have directory '/home/acecoach/public_html/acecoach' in PythonPath, they would actually use 'settings' and not 'acecoach.settings'. Rather than do that though, they should add '/home/acecoach/public_html' to PythonPath as well and keep using 'acecoach.settings'.
    From Dan Carley
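
    Putting the answer and Graham's comment together, the <Location> block from the question might end up like this (a sketch; the paths and project name are taken from the question):

    <Location /acecoach/>
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        # a Python module name, not a file name
        SetEnv DJANGO_SETTINGS_MODULE acecoach.settings
        PythonOption django.root /acecoach
        # parent directory added so "acecoach.settings" is importable as a package
        PythonPath "['/home/acecoach/public_html', '/home/acecoach/public_html/acecoach'] + sys.path"
        PythonDebug On
    </Location>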

How to install MPM worker on CentOS 5.3?

I use this command: yum install apache2-mpm-worker

with no success. Searching on Google also turned up nothing.

Thanks.

  • Uncomment the httpd.worker line in /etc/sysconfig/httpd:

    # The default processing model (MPM) is the process-based
    # 'prefork' model.  A thread-based model, 'worker', is also
    # available, but does not work with some modules (such as PHP).
    # The service must be stopped before changing this variable.
    #
    #HTTPD=/usr/sbin/httpd.worker
    

    Cheers

    From Jason
  • I've done that and restarted Apache. I ran httpd -l and it only shows prefork.c, not worker.c. I have checked the sbin directory and know for a fact that the httpd.worker file exists. Any other ideas?
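
    For what it's worth, on CentOS 5 the worker MPM ships as a separate binary (httpd.worker), so checking the default binary with httpd -l will always show prefork.c. A sketch of the switch and of a check against the right binary:

    service httpd stop
    # uncomment in /etc/sysconfig/httpd:
    #   HTTPD=/usr/sbin/httpd.worker
    service httpd start

    # list the MPM compiled into the worker binary itself;
    # plain `httpd -l` inspects the prefork binary and will always show prefork.c
    /usr/sbin/httpd.worker -l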

Which version of Fedora for postgreSQL database server

I have very little experience with Linux, but I have been given the task of setting up a database server running a data warehouse based on PostgreSQL. My question is which version of Fedora I should choose. I thought of version 12, but I am worried since it is only one month old. If I run into a bug, it could take me months to realize that it is not simply me doing something wrong. On the other hand, if I chose version 11 or 10, I could run into old bugs or issues that are complicated for the user and that have been fixed or made easier in newer versions.

Also... Are there any "need to have" features in fedora 12 when running a datawarehouse?

Uptime for the server is not as important as performance.

  • Actually, you should install the latest stable version of Fedora, which I believe is 12. If you find that you need updates to certain versions of software, which would include your database, you can always follow the instructions for upgrading Fedora using yum found on the official website.

    Good luck, and hope this helps some.

    From Chris
  • I wouldn't install Fedora, especially not 12. They made a change to the way it handles its package management that basically allows non-root users to install anything in the repos. They may have patched it by now; I'm not sure, since it wasn't considered a bug, but rather an intentional policy change.

    If you want that Redhat goodness with long support cycles, take a look at Centos.

    David : Security is not an issue for me, but thanks for the tip.

ApplicationXtender?

Where can I find free technical documentation on EMC ApplicationXtender? This is a distributed system for "Storage, Organization, and Management of Business-Critical Information".

The IT team I'm on has a server installation of this product on its machines, but I'm having trouble finding knowledge about it.

There are Microsoft Windows and web-based clients. The administration is "easy", but I have no manual or documentation that can help me understand, monitor or fix it...

  • You should be able to find some information on EMC's Powerpath site (http://powerpath.emc.com).

    There's not much to the application. You feed files into it and index them. It stores the indexed data in a database, which it then queries to find the actual file location and name, which it then displays.

    From mrdenny
  • Good afternoon. I am a systems engineer for a company that is an OEM for the AX product line. If you would like to contact me off-list, I might be able to help you with your search.

    From GaVinci

Top of Rack Switching, No Single Point of Failure

Assume that you have 1 rack in a reliable colo facility. The colo (obviously) has advanced chassis switches and can provide any reasonable manner of drops specified (but a limited number of drops). That is to say, you can specify two GB cat6 drops configured such that (specify additional config here).

Also assume that you have N (say 10) "servers" each with 2 GB ethernet ports. Each server needs to have one always accessible, routable ip address. That is to say, each server has an IP address WWW.XXX.YYY.ZZZ that should be pingable from any properly configured host on the internet.

What is the simplest logical and physical network topology you can install top of rack such that there is no single point of failure leading to IP connectivity issues between the servers and the gateway provided by the colo?

By simple, I mean, generally speaking, cheapest to implement using Cisco networking gear. That is a rough definition, but I think it should correlate well with the answer I am after.

  • You need two switches; hook your colo/ISP's uplinks to each of them. Between the two switches set up two patches and enable Rapid Spanning Tree on each of the switches. Doing this makes sure only one of the two patches is used, and only one of the uplinks.

    Then on each server set up bonding: http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding (a config sketch follows after the comments below). This assumes Linux servers; other OSes have their own approach, but most if not all support it.

    This setup relies heavily on Rapid Spanning Tree and can be achieved with most hardware.

    For a more complicated setup (not really what you're looking for, though), you can use switches with routing capability, terminate each server on its own VLAN, and use VRRP or HSRP to make the servers' gateway redundant, with a rapid spanning tree on each VLAN to make sure it doesn't loop through the two links between the switches. Then finally use BGP to handle automatic failover between upstream links. If you use switches with little memory, you can have your ISP announce 0/0 rather than a full routing table.

    Hope this helps :)

    womble : Questioner is using WinXP. Channel bonding fun for the whole family.
    Rune Nilssen : Well, Ethernet bonding is usually a server OS feature ;]
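
    As promised above, a rough sketch of the bonding half (assuming RHEL/CentOS-style network scripts, since the distribution isn't specified; addresses and interface names are placeholders):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
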
  • A couple of refinements to Rune's suggestion:

    • if you use Cisco Catalyst 3750s, the stacking cable will eliminate spanning tree between the switches and provide greater bandwidth and redundancy without using host ports.
    • If the colo has Cisco 6500 cores with the Sup720 VSS blades, then you can do EtherChannel on the 3750s to virtual EtherChannel on the cores (switch1 -> core1 & switch2 -> core2), further reducing spanning tree
    • If supported by the colo:
      • use HSRP, VRRP or GLBP for gateway redundancy
      • implement UDLD aggressive on the uplinks (assuming they are fiber)
    • consider using RPS units for power redundancy on the 3750's
    From Peter

I have some questions regarding VPS and what I should order ?

I have a classifieds website with around 50 thousand ads placed every month. I don't have any more statistics than that. I use PHP and MySQL currently, and I am about to install Sphinx or Solr on whatever VPS I get.

I am about to order a VPS (virtual private server) from a provider, but need some answers on these questions first:

1- First is, if I order a VPS, even if it is a Linux Ubuntu OS, that means I can administer it from my windows computer at home right?

2- How would I know which version of Ubuntu I need, is the latest preferred?

3- I am very good with windows, and have no prior experience with Linux, is this a problem really?

4- Do I require anything I might have missed here that you know of, in order to maintain this website myself with the VPS account?

5- My VPS provider can charge extra in exchange of a service called DirectAdmin ControlPanel, does anybody know what this is and if it is something I need?

6- What is 'automatic failover' and do I need it?

Any good articles that you know of will help also...

Thanks

    1. Yes, though it will most likely be Ubuntu Server, without a desktop present, so you will have to administer it through Putty, or something similar, which offers a console interface.

    2. The latest is preferred, and 64-bit if you have a reasonably large amount of RAM (over 1GB), otherwise it won't be worth it on a small 512MB VPS, since it will slightly increase the amount of RAM used.

    3. That could be a bit of a "Houston, we have a problem" situation.

    4. (Some) Knowledge of Linux.

    5. It will offer point-and-click to add domains, upload files and configure databases.

    6. Depends, both on what it means and if you need it, though only you know if you need it, and only your provider knows how they define "automatic failover". I assume they might simply offer the possibility of starting up a new VPS connected to the same file system when your VPS goes down, essentially looking like a reboot after a crash to you, but if only Apache crashes, this sort of automatic failover won't take place.

    Joe Internet : @OP - I would suggest the long term support version of Ubuntu server, instead of the latest. I would also suggest that you install VirtualBox on your PC, and "practice" administering a local VM with Ubuntu server installed. It's an easy & safe way to gain some experience.
    ceejayoz : @Joe Internet LTS vs. latest depends a lot on what you're doing. If your project won't last beyond the end of support for the latest version, latest is fine. If you need newer versions of PHP, MySQL, etc., latest may be necessary.
    From gekkz
    1. Yes
    2. Yesish... I'd suggest the Long Term Support server edition
    3. Yes, that is a bit of a problem.
    4. The most basic method is to download PuTTy and use that to SSH into your virtual host. That would allow you to do practically anything you needed, but you may find yourself with some issues due to #3.
    5. It is a simplified interface that allows control of the server without using the command line. You do not need it, but it may help you since you lack Linux experience.
    6. Automatic failover... is a lot of things. I'm assuming it's an option offered by your hosting service. If that's the case, I can't answer. I don't know what that concept means for them. Odds are, you will survive if your site for some reason is down for up to ~8 hours. You should have some method of regularly backing up your server to another location that you can recover from if the site really dies.
    From Autocracy
  • 1,2,3 - Yes.

    4,5,6 - Do you require a VPS at all? Or do you simply want to host a website? If you don't do any crazy background processing, custom deployment or whatever else, there's no point in getting a VPS really. If you get a proper hosting account you'll be able to change your site whenever you want and others will take care of the whole system. With a VPS you'll have to take care of backups, database maintenance, system updates, etc. You'll either have problems with keeping the host up and running, or with keeping script kiddies out.

    Think about a proper hosting account instead. Or if you like windows, then just get a VPS with Windows Server and IIS + php installed.

    From viraptor

path to ffmpeg in linux hosting server

What will be the path to ffmpeg on a Linux hosting server?

  • It might be in the bin, etc... It kinda depends where it got installed.

    Use the find function to get it for sure.

    From marcgg
  • Try whereis ffmpeg on the command line.

    marcgg : +1 because whereis is cool ( http://linux.about.com/library/cmd/blcmdl1_whereis.htm ). Though ffmpeg needs to be in the PATH in order for that to work, doesn't it?
    Raphink : Yes, but `which ffmpeg` is faster to just find the path of a binary :-)
    From Pekka
  • On a hosted Linux server, it may not even be installed. Probably depends on your hosting package.

    But if it is installed, /usr/bin (for the executable) and /usr/lib (for the libraries) would be the first place I'd look.

    Also, locate ffmpeg may be a helpful command to try.

  • If ffmpeg is in the path, use which ffmpeg to find its path.

    If it's not in the path, use locate ffmpeg.

    The fact that it's a server should not change the path where it is installed if you installed it with packages, so it should probably be in /usr/bin/ffmpeg.

    From Raphink
  • try 'locate', 'which', or 'whereis' ... If all fails, then 'find / | grep ffmpeg'

    From joet3ch

Under what condition(s) is it acceptable to do an "automatic reboot" on Linux

I have a 24/7 system with a couple of semi-autonomous nodes (embedded x86 mini-PCs) running Ubuntu Jaunty (9.04). Each of them needs a network connection to gather the information it operates on. I use monit to restart some services if they're down for some reason, and I monitor each node using Nagios 3, but I don't know a good way to automatically evaluate system sanity under Linux. To be more specific, in case the network connection is having problems (e.g. the network driver isn't working properly), how can each node evaluate its "health" to determine that it needs a reboot (sorry for not being more specific)? Do you have opinions/experience about this?

Thanks in advance!

  • Do functional testing (you can write Nagios checks for that if none are available; it is not that hard if you know some scripting language). Test that your services are reachable and functioning correctly from the Nagios machine.

    The node itself can try to reach your Nagios machine and restart itself if it's unreachable, but it's probably preferable to run on hardware that has good drivers available in the first place...

  • I don't know of a situation when an automatic reboot is necessary and can be launched from the machine itself. In the worst case, you can set a watchdog that will reboot the machine if it's stuck. In most situations though, it is preferred to just restart services. If you want an intelligent way of doing that, I'd use puppet to manage dependencies between files, packages and services.

    rodjek : Agreed, automatic reboots smells like a way to fix the symptoms of a problem temporarily, rather than the underlying issue.
    From Raphink
  • Do you people have opinions/experience about it?

    I think you're anticipating and toying with black magick that is commonly associated with Windows.

    I've never seen, and would be very suspicious of, a connectivity issue that can be reliably fixed by rebooting. Even if it were to provide a temporary fix, I'd want to be pretty sure of the cause and resolution before bringing the machine back into service.

    From Dan Carley
    1. Use a hardware watchdog to monitor and reset the systems if they hang.
    2. Whatever the machines are doing, use monit or Nagios to monitor how many requests per second or minute are performed and warn a human if that number drops below a certain threshold.
    From Alex Holst
  • What about just bringing the interface down and then back up? It does fix most issues that rebooting would fix.

    And just do it from cron, or use a script to check connectivity: if things are bad, take the interface down and back up, and if that doesn't fix it, reboot (a sketch follows below).
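
    A minimal sketch of such a cron-driven check (the gateway address and interface name are assumptions; the reboot really is the last resort):

    #!/bin/sh
    # check connectivity; bounce the interface first, reboot only if that fails
    GATEWAY=192.168.1.1
    IFACE=eth0

    ping -c 3 -W 2 "$GATEWAY" >/dev/null 2>&1 && exit 0

    ifdown "$IFACE"; ifup "$IFACE"
    sleep 10

    ping -c 3 -W 2 "$GATEWAY" >/dev/null 2>&1 && exit 0

    logger "connectivity check failed after interface restart, rebooting"
    /sbin/reboot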

sql server 2008 full text

I just migrated to SQL Server 2008, and for some reason the "enable full text" checkbox for all my databases is grayed out. Any idea why?

I do not see full text in Services, just the Full-text Filter Daemon Launcher.

  • Did you install the full-text feature during the migration? It's an optional component, so by default it will not be installed. You can check whether full-text is installed by querying the IsFullTextInstalled SERVERPROPERTY (see the query below).

    Also, how did you migrate? There are specific steps for full-text upgrade that have to be done, see Full-Text Search Upgrade.
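
    For reference, that check is a one-liner (IsFullTextInstalled is the documented property name; the query form is just an illustration):

    -- returns 1 if the Full-Text component is installed, 0 if not
    SELECT SERVERPROPERTY('IsFullTextInstalled');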

Offline defragmentation of Exchange 2003 mail store

What is the best way to run an offline defragmentation of an Exchange 2003 mail store if you don't have enough disk space on the server? Can you attach an external drive, store the temporary defrag file there, and then just copy the defragmented file to the path where the database is located?

  • You can redirect the temporary files to another drive using the /T option, or indeed if you really have to you can actually take the database files (you want the EDB and STM files) and the ESEUTIL tool and run it on another box which does not even have Exchange on. Obviously transferring everything all takes time while Exchange is offline, so this is a last resort. How to run Eseutil on a computer without Exchange Server

    How much free (white) space do you have in the database? The online tools are pretty efficient normally unless you have done a big bunch of mailbox move or delete operations.

    Evan Anderson : Unless you have gobs of whitespace I wouldn't worry about an offline defrag.
    From AdamV
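
    As a concrete illustration of the /T redirection mentioned above (the paths are placeholders, and the store must be dismounted first):

    eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb" /tE:\Defrag\tempdfrg.edb
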
  • Another option, because defrags can be a little scary: just create another database and move all the mailboxes to it. You then don't have a big long outage for everyone and there's very little risk. You end up with a shiny new database without much fragmentation.

    Evan Anderson : You lose single-instance storage when you do this. Depending on your data this may result in a larger database at the end of the process
    From

mod_rewrite add and switch directory

How do I change the URL pattern with mod_rewrite, first from

domain.de/images/myfile.jpg

to

domain.de/directory/images/myfile.jpg

and then finally to

domain.de/images/directory/myfile.jpg

My rules so far

RewriteCond %{HTTP_HOST} ^(www\.)?domain\.de$
RewriteCond %{REQUEST_URI} !^\/directory
RewriteRule ^(.*)$                                              directory/$1 [NC]


RewriteCond %{REQUEST_URI} ^\/directory\/images
RewriteRule ^\/directory\/images\/(.*)$                       images/directory/$1 [qsappend,L]

The first part is working, but the directory swap fails.

  • After processing the second rewrite rule, the first rewrite rule gets into an endless loop. To interrupt it, I had to change the first rule to check whether "directory" is part of the URL at all, like this:

    RewriteCond %{HTTP_HOST} ^(www\.)?domain\.de$
    RewriteCond %{REQUEST_URI} !\/directory\/
    RewriteRule ^(.*)$                                              directory/$1 [NC]
    
  • Have you tried:

    RewriteCond %{HTTP_HOST} ^(www\.)?domain\.de$
    RewriteCond %{REQUEST_URI} !^/directory/
    RewriteRule ^(.*)$                                              directory/$1 [NC]
    

    Note the RHS is !^/directory/, with the anchoring ^.

    (Is the HTTP_HOST check really necessary? Can't you put this in the VirtualHost section for domain.de?)

    The second rewrite need only be:

    RewriteRule ^/(directory)/(images)/(.*) /$2/$1/$3 [QSA,L]
    

    I don't think you need the RewriteCond in this clause; if the URI doesn't begin with /directory/images then the rule won't match, so the RewriteCond is redundant.