Thursday, January 27, 2011

Add a remote computer to my network neighborhood by IP address?

Hi All,

Some years ago, I was working with a contractor who needed me to access a few things on his computer. I dimly remember that he had me add his IP address to a file that was buried about four directories deep in the Windows directory (I think this was Windows 2000 on both ends), and magically his computer then showed up in my network neighborhood exactly as if it were on our local LAN.

For the life of me, I don't remember what file that was, but I remember clearly that simply adding an IP to it was all that was necessary.

Anyone know what that was?

Thanks!

  • lmhosts, under the Windows system32\drivers\etc directory.

    Eli : That's it - thanks!
    John Gardeniers : The hosts file would be a more likely and common candidate.
    Ignacio Vazquez-Abrams : Except that `hosts` is not used for that.
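
For reference, an lmhosts entry is just an IP address and a NetBIOS name; the optional #PRE tag preloads the entry into the local name cache. The IP and machine name below are made up:

```
# %SystemRoot%\system32\drivers\etc\lmhosts
203.0.113.50    CONTRACTOR-PC    #PRE
```

After editing the file, `nbtstat -R` reloads the cache without a reboot.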

Windows Server 2008 R2: Backup notifications

Hi serverfault,

Is it possible to have Windows Server 2008 R2 send a notification, preferably by e-mail, when a backup fails or has warnings?

We've had the issue of incomplete backups (luckily detected before there was any need for them), but we don't want this to happen again. Knowing ourselves, we will probably forget to log into the server every day to check, so I'd want a notification...

Thanks in advance.

  • You can use Task Scheduler to monitor events in the Microsoft-Windows-Backup/Operational log and send an email notification to you on errors and/or success.
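
A hedged sketch of the Task Scheduler side: attach a task to critical/error-level events in the backup channel. Here `alert.cmd` is a hypothetical script of yours that actually sends the mail (e.g. via a PowerShell one-liner or blat); the event-level filter is an assumption to adjust for your needs:

```
schtasks /Create /TN "BackupFailAlert" /TR "C:\scripts\alert.cmd" /SC ONEVENT ^
  /EC "Microsoft-Windows-Backup/Operational" /MO "*[System[(Level=1 or Level=2)]]"
```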

    From Quidrick

Thomson TG585v7 causes random disconnects

Hello,

Using Thomson's TG585v7 (latest firmware) as an internet gateway with the following setup:

  • Windows XP @ Lenovo T61
  • DHCP
  • WPA2-encrypted WLAN

causes infrequent internet disconnects. The LAN stays up; it's just internet access over the WLAN that drops. The internet connection on the gateway itself is up, too.

There are two ways to solve this problem:

  1. Repair/Disable+Enable the WLAN in Windows
  2. Switch the notebook's WLAN off and on, which implies the first.

What could that be?

  • Try:

    • a different (hopefully less used) channel
    • repositioning the antenna(s)
    • not using the default SSID (if you currently are).
  • I've experienced the exact same problem on the exact same hardware across multiple firmware revisions, with a different client system and OS than yours. WLAN access through to the 'net disappears, or performs unbelievably slowly, or the WLAN drops out and reconnects intermittently. One other symptom was being almost unable to reach the admin interface for the device via the WLAN.

    I tried disabling the intrusion detection and firewall elements of the router (via telnet), and messing with the wireless channels, and QoS... nothing fixed it.

    In the end I swapped the unit out for a Linksys WAG200G and have had approximately zero problems since.

    I personally believe the model is junk, and there's little to do to remedy its issues.

    If you're in NZ and on Telstra the only thing to beware of when setting up a replacement is that VPI/VCI settings need to be manually set at 0/100. Username and password can be anything you like, it all works.

  • user48838, I've tried everything with no success.

    Chris Thorpe, okay, if nothing helped for you either, I'll probably buy a new one as well. This model seems to be crap.

    BTW: It's my second unit of that model, so it looks like this is a persistent problem.

    No, it's AT here. Thank you for your answers.

Managing reserved Amazon EC2 instances

I'm writing an app and had been paying an hourly rate for my EC2 instance, as I've needed to test. I decided I should just pay for a reserved instance to save money in the long run, but now that I have one, I'm confused about how I'm supposed to manage it. In the "Instances" section of the EC2 management console, I can see the instances that I've launched in the past, and I can stop/start them as I see fit. However, it seems the only way to view my reserved instance is to use the "Reserved Instances" drop-down, but this only seems to let me view them, but nothing else...

So, my question is, how can I do the same thing with my reserved instance(s) that I've been doing with my hourly instance(s)? I basically just want to associate my elastic IP with my reserved instance and install my server image on it.

  • How do I purchase and start up a Reserved Instance?

    You purchase an EC2 Reserved Instance by calling the PurchaseReservedInstancesOffering API method. Launching a Reserved Instance is no different than launching an On-Demand Instance. You simply use the RunInstances command or launch an instance via the AWS Management Console. Amazon EC2 will optimally apply the cheapest rate that you are eligible for in the background.

    How do I control which instances are billed at the Reserved Instance rate?

    The RunInstances command does not distinguish between On-Demand and Reserved Instances. When computing your bill, our system will automatically optimize which instances are charged at the lower Reserved Instance rate to ensure you always pay the lowest amount.

    http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?FAQs_Reserved_Instances.html

    From Rob Olmos
  • To elaborate on Rob Olmos's post, instances are instances. What a "reserved instance" buys you is the right to pay less for any instances you are running over time.

    Let's go for a super-simplified example. You run two instances 24/7.

    April
    2 x Small Instance, 10c/hr ($72 ea): $144

    May (note: this May has 30 days, due to a decree by the Pope)
    2 x Small Instance, 10c/hr ($72 ea): $144

    June
    1 x Instance Reservation, ordered on the first of the month ($227.50): $227.50
    1 x Small Instance, 10c/hr ($72 ea): $72
    1 x Small Instance, 4c/hr ($28.80 ea): $28.80

    If you now turn off all your instances, you don't get any money back on your reserved instance (it isn't necessarily being very cost-effective for you at the moment); but if you later turn a matching instance on (in the correct Availability Zone) then you will be charged at the lesser rate.
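
The June arithmetic above can be checked with a quick script. All rates here are the answer's illustrative numbers, not current AWS pricing:

```python
# Re-deriving the June example: one reservation fee, one instance still at
# the on-demand rate, and one instance at the reserved usage rate.
HOURS = 720  # hours in a 30-day month

ON_DEMAND = 0.10          # $/hr, regular Small instance
RESERVED_USAGE = 0.04     # $/hr, usage rate once a reservation applies
RESERVATION_FEE = 227.50  # one-time fee for one Small reservation

june_total = RESERVATION_FEE + ON_DEMAND * HOURS + RESERVED_USAGE * HOURS
print(round(june_total, 2))  # 328.3
```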

    From crb
  • You effectively ignore the fact that you have a Reserved Instance; it just sits in the background and Amazon will automatically bill you at the lower rate where possible. If you run more instances than you have reserved, the additional ones will be charged at the higher (on-demand) rate.

Failover clusters of WPAR with HACMP in AIX 6.1?

Hi,

Is it possible to create a failover cluster using Workload Partitions (wpar) in AIX 6.1?

I want to create a two-node cluster such that each of the nodes is a WPAR, and in the event of a software failure the application fails over from one WPAR to the other WPAR in the cluster.

I know that we can do so in Solaris 10 with Zone cluster feature, but not sure about AIX.

TIA.

  • Hi,

    As it turns out, it is not possible to create a cluster of WPARs. It is possible for Logical Partitions (LPARs), but LPARs have their own overhead, unlike WPARs.

    HTH

    From UnixAdmin

Encrypted file system on Ubuntu cloud server

Hi

We have a cloud server (AWS) running Ubuntu 8.04. All the employees (around 15 people) in the company have system accounts on the server and all are sudoers. We want to provide a way for all users to store private data that is password protected and not viewable by others even with their root privileges. And there should be an easy mechanism for Windows (Vista/XP) users to copy data to/from the server (click/drag/copy/paste, etc.). Any solutions, guys?

-Geos

  • You have two primary (and slightly conflicting) requirements:

    1. Encryption on a per-user, not per-system, basis.
    2. Transparent operation from the user's point of view.

    Are your users reasonably Linux-capable? I'd hope so if they have root privs...

    You can't just create encrypted file systems with cryptoloop on the Ubuntu server, because as soon as one user mounts one, every other user with root would be able to see the mount in its decrypted form.

    One option is the commercial PGP product, using PGP Net. Store the .pgd files on a Samba share hosted on the Ubuntu server (I've done essentially the same, but on a real Windows server share - it works, but it's not multi-user). This is about as transparent as it gets - the user mounts the .pgd as a drive letter in Windows, then just uses it like any other network drive. I don't know that it will be terribly fast though, and you've still got the issue of how to securely allow Windows networking ports between your Windows machines and the server. You could VPN tunnel it (even with no encryption on the tunnel, as I guess the data is already encrypted), but that's going to be interesting unless you have a local box that can act as a VPN gateway - IPSec under Windows isn't pleasant to configure.

    There's probably a time-limited trial for PGP Net, I'd have thought, so it might be something you can evaluate for just an investment of time.

    One last (and not terribly pleasant) option I can think of might be to do something with a base Ubuntu server, and a lightweight virtualisation that will work on a server in AWS (Xen VMs I believe?), so that each user gets their own 'jailed' area and can mount their cryptoloop filesystem inside that, then either scp files in and out, or run Windows networking over IPSec or an ssh tunnel etc. I'm not entirely sure how well something like User Mode Linux would work in keeping each virtual session separate from others when users have root to the base machine - there's still some scope for tinkering via the base environment, but if your users don't trust each other to that degree, then they should be on their own VMs anyway!

  • I think what you want is ecryptfs - an encrypted file system supported by the Linux kernel and well integrated into Ubuntu, although the main integration went into versions of Ubuntu after 8.04.

    For instructions for setting it up for Ubuntu 8.04, see this guide or this guide.

    So this should meet your requirements with one or two caveats. When a user's private directory is not mounted, nobody will be able to see the files without knowing that user's passphrase. However, when the private directory is mounted, other users could change to be that user using sudo su username and could then read their files.

    As to sharing with Windows, the secret directory would have to be mounted by the user and also be available as a network share over Samba. I'm afraid I don't know how to set up Samba so there are multiple shares and only one user can use each share, but if you ask on http://serverfault.com/ you should get some help with that. (And possibly this question belongs on Server Fault.)

    grawity : "other users could change to be that user using `sudo su username`" - or they could just patch ecryptfs or `sshd` to write passwords to some dark corner. Things like that are very much based on trust. (also, it's `sudo -u username -i`)
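
For later Ubuntu releases, where the integration is built in, the per-user setup is roughly the following (assuming the ecryptfs-utils package; on 8.04 you'd follow the linked guides instead):

```shell
sudo apt-get install ecryptfs-utils
ecryptfs-setup-private    # prompts for login and mount passphrases, creates ~/Private
ecryptfs-mount-private    # mounts ~/Private for the current user
ecryptfs-umount-private   # unmounts when done
```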
  • Hey guys,

    I've found a nice blog post on private data clouds: http://bigdatamatters.com/bigdatamatters/2009/09/private-cloud-eucalyptus.html

    Hope it's helpful :)

LAN->LAN IP translation (for TortoiseSVN + Artifacts + Buffalo router)

Here's my scenario:

I've got a VisualSVN server on my main dev box @ home. I'm also using Visual Studio 2010, TortoiseSVN, VisualSVN client (for source control), and Versioned 'Artifacts' (for bug tracking).

(I had to modify the fake URLs below to use only one slash because, as a new user, I can't post more than one real URL.)

I've got my Buffalo AirStation WHR-HP-G300N router properly configured so my business partner can connect to the SVN server. I have port forwarding enabled for the internet-side IP address (like http:/99.888.77.66:443) which gets forwarded to an internal IP (like 192.168.11.6). This part is working great.

The problem I'm having is with the integration piece between TortoiseSVN and my bug tracking system. I need to provide a bugtraq:url property, but I haven't been able to get relative paths to work. So I'm forced to use an absolute URL. On my end, I need to use the name of my server (for example: bugtraq:url = https:/my-server/svn/bla..), but this doesn't work for my partner. He needs to specify the IP address (for example: bugtraq:url = https:/999.888.77.66:443/svn/bla...)

Is there a way to configure my router such that the IP address for this parameter gets re-routed/re-mapped to "https://my-server" if the request originates from the LAN itself? My router's software supports LAN->Internet and Internet->LAN, but I don't see LAN->LAN.

  • As suggested by someone on StackOverflow.com, the solution was to edit the HOSTS file. In my case, I needed to get my business partner to edit his HOSTS file after I changed the bugtraq:url on my end to:

    https://my-server/svn/Bla...

    Then he modified his HOSTS file as follows:

    999.888.77.66 my-server

    Works like a charm.

best apache2 config for this server

  • 1,200,000 requests/day
  • 30,000 <> files/day
  • 5,500 unique IPs
  • wordpress + cache
  • debian lenny + php + suhosin patch + mysql
  • 4GB RAM
  • single sata disk
  • Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz

I'm using the default Apache configuration, but sometimes I get a zombie Apache process.

Any ideas?

My config can be read at pastebin.ca/1934046.

  • Are you using the prefork MPM? Probably, so I will go with that.

    MaxSpareServers and StartServers are kind of too far away from each other. The other thing is, check which modules you have enabled and leave only the ones you need, so you will have smaller apache2 processes: smaller processes mean more processes in less memory.

    The other thing I can suggest is to monitor the number of Apache processes you usually have and tune from there. If you always run at a low of 30 and a high of 80, you can keep the spare servers higher.

    Also, take a look at the keepalive configuration: you may want to raise MaxKeepAliveRequests and maybe lower KeepAliveTimeout. If you start to get too many zombies you can turn off keepalive entirely or set the number of requests to a low value (so the Apache children are recycled faster), but there's a performance hit.

    Anyway, there's no way to exactly tune it without monitoring and seeing if there's any bottleneck to be solved on apache or the OS. Take a look at vmstat and check the number of processes, what they are doing and so on so you can identify choke points.
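
To make the advice concrete, here is an illustrative prefork/keepalive fragment for a 4 GB host. Every number is a starting guess to tune against your monitoring, not a recommendation for this exact site:

```apache
<IfModule mpm_prefork_module>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      25
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>

KeepAlive On
MaxKeepAliveRequests 200
KeepAliveTimeout 3
```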

    From coredump

Looking for a good freelance site

We want to find a freelancer to whom we can outsource a certain Linux job.

Interix/SUA SSH server any good?

There are a lot of homebrew SSH servers (some free, some paid) out there, but I’ve always stuck to Cygwin’s port of OpenSSH because:

  1. Despite all of its strange properties, I am familiar with Cygwin, and more likely to know how to debug it when things go wrong, and

  2. It gives me only a slightly braindead shell (i.e. not cmd.exe) to work in.

This was the state of affairs several years ago, when I quit using Windows for Linux. Well, I’m back now, and some things have changed:

  • Cygwin is still braindead. In a variety of colorful ways.

  • MinGW’s msys utilities are feature complete enough to give a reasonable environment for a developer interested in doing native Windows development in a Unixy skin. However, it still doesn’t come with an SSH server.

  • Microsoft has included Interix (also known as Subsystem for Unix-based Applications) in recent versions of Windows.

I've been using MinGW to do most of my native Windows development these days, and I am quite happy to report that it is here to stay. However, the lack of an SSH server had been killing me, and most of the other options seemed insufficiently compelling for me to stop using Cygwin’s OpenSSH (which also happens to give me a ton of other useful packages which will do the Right Thing™ as long as I’m not compiling C.)

But SUA is possibly the thing that will let me ditch Cygwin forever! In particular, the SUA community appears to have a version of the OpenSSH server. So my question: does it actually work, and is it sufficiently on track to become the de facto SSH implementation (much like Remote Desktop Services became the de facto remoting application for enterprise Windows) that it is worth switching to?

  • My university uses OpenSSH for SUA to get SFTP access on Windows Server 2008 systems. The thing you will have to worry about is that SUA is UNIX, not Linux (logs are in different places, for example), and tools created to protect against SSH brute-force attacks, such as DenyHosts and fail2ban, simply don't exist for it to my knowledge. Overall I would definitely recommend installing and looking at it, but keep the security issue in mind.

What would cause a 500 Internal Server Error when accessing the Report Manager url in SQL Server Reporting Services 2008 R2?

I'm very new to Reporting Services and I'm not even sure what to ask. I was given a server where it was installed. I can run the Reporting Services Configuration Manager and connect. However when I attempt to access the urls (web service and report manager) I get a 500 error. I don't see anything in the event viewer. Are there log files somewhere? I'm suspicious of the IIS setup. In particular, the virtual directories. How should these look? I know this question is broad, but general guidance is appreciated.

Update: After examining the logs, here is the first error:

servicecontroller!WindowsService_0!358!09/07/2010-13:30:52:: e ERROR: Exception caught loading and setting code permissions policy level: System.NotSupportedException: This method explicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkID=155570 for more information.
   at System.AppDomain.SetAppDomainPolicy(PolicyLevel domainPolicy)
   at Microsoft.ReportingServices.Library.ServiceController.SetAppDomainPolicy()
library!WindowsService_0!358!09/07/2010-13:30:52:: e ERROR: ServiceStartThread: Exception caught while starting service. Error: System.NotSupportedException: This method explicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkID=155570 for more information.
   at System.AppDomain.SetAppDomainPolicy(PolicyLevel domainPolicy)
   at Microsoft.ReportingServices.Library.ServiceController.SetAppDomainPolicy()
   at Microsoft.ReportingServices.Library.ServiceController.ServiceStartThread(Object firstStart)
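
The error above names its own workaround: the NetFx40_LegacySecurityPolicy switch. A sketch of that change, assuming the usual SSRS service config file (verify the name and path for your instance):

```xml
<!-- ReportingServicesService.exe.config, under the existing <configuration> element -->
<runtime>
  <NetFx40_LegacySecurityPolicy enabled="true" />
</runtime>
```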

After fixing the CAS issue, I now find this error in the log (after my initial attempt to access the web service):

appdomainmanager!DefaultDomain!f6c!09/07/2010-14:32:20:: e ERROR: AppDomain ReportServer_11 failed to start. Error: The configuration system has already been initialized.
library!DefaultDomain!f6c!09/07/2010-14:32:20:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerHttpRuntimeInternalException: Failed to create HTTP Runtime, Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerHttpRuntimeInternalException: An internal or system error occurred in the HTTP Runtime object for application domain ReportServer_11.  ---> System.InvalidOperationException: The configuration system has already been initialized.
   at System.Configuration.ConfigurationManager.SetConfigurationSystem(IInternalConfigSystem configSystem, Boolean initComplete)
   at System.Web.Configuration.HttpConfigurationSystem.EnsureInit(IConfigMapPath configMapPath, Boolean listenToFileChanges, Boolean initComplete)
   at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)
   at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)
   at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
   at System.Web.Hosting.ApplicationManager.CreateAppDomainWithHostingEnvironmentAndReportErrors(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
   at System.Web.Hosting.ApplicationManager.GetAppDomainWithHostingEnvironment(String appId, IApplicationHost appHost, HostingEnvironmentParameters hostingParameters)
   at System.Web.Hosting.ApplicationManager.CreateObjectInternal(String appId, Type type, IApplicationHost appHost, Boolean failIfExists, HostingEnvironmentParameters hostingParameters)
   at System.Web.Hosting.ApplicationManager.CreateObject(String appId, Type type, String virtualPath, String physicalPath, Boolean failIfExists, Boolean throwOnError)
   at ReportingServicesHttpRuntime.RsHttpRuntime.Create(RsAppDomainType type, String vdir, String pdir, Int32& domainId)
   --- End of inner exception stack trace ---;
  • SSRS does not rely on IIS (from SQL 2008 onwards); it uses its own http.sys environment to host the web service and the report manager.

    Refer to:

    C:\Program Files\Microsoft SQL Server\MSRS10_50\Reporting Services\LogFiles

    for the possible cause.

    With so little detail it is hard to know. If you come up with an error in the log files, post it here to help further.

    From Mikeware
  • In the end, I installed on a server without .NET 4.0 and I was able to get it working.

    From bennage

Change DNS (Windows server); match IP with domain

I have a Windows Server 2003 machine and the IP of the server is 74.62.x.x. I have just bought a .com domain and now I want to match the IP with the domain. What do I do? Thanks in advance.

  • You can create an "A" record under the SOA for your domain. Your registrar (where you purchased your domain) is a good starting point in sorting that out.
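
As a sketch, what the registrar's DNS control panel ends up creating is a plain A record. The names and address below are placeholders, not the asker's real ones:

```
; hypothetical zone file fragment
example.com.       3600  IN  A  203.0.113.10
www.example.com.   3600  IN  A  203.0.113.10
```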


Weird symbols always appearing in command line (putty / zsh)

I've had this problem for a while and I hope it's a pretty easy fix:

In various places, I'll see weird symbols appearing in the command line, such as 'â'. It seems like it's replacing some other character...? For example, when I do

prompt -p

I'll get lots of 'â' symbols, such as:

fade theme with parameters `white grey blue':
ââââuser@hostââââ Sun Sep 05 05:57:20pm
_cwd}~/ command arg1 arg2 ... argn

user and host replaced my actual user / host, but everything else looks exactly like that.

I've also seen those symbols in g++ compiler messages, such as:

test.cpp: In function âint main()â:
test.cpp:6: warning: unused variable âxâ

What's going on and what can I do to fix it? The shell I'm using is zsh (but I also see the symbols in bash). I'm using ubuntu and putty. Thanks!

  • Your PuTTY character set and your terminal character set don't match. Use echo "$LANG" and look after the period for what it should be, and set it in PuTTY.

  • Your terminal is outputting characters encoded using UTF-8, but PuTTY is interpreting the bytes it is seeing in another character set (probably ISO-8859-1).

    You can change PuTTY to use UTF-8 by changing the 'received data assumed to be in which character set' option under Window\Translation:

    From Phil Ross

AclPermissionsFacet fault install SQL-2008-R2

While attempting to do an installation repair of SQL 2008 R2, I'm failing the pre-check rules.

Module that is failing is AclPermissionsFacet - with this message "The SQL Server registry keys from a prior installation cannot be modified. To continue, see SQL Server Setup documentation about how to fix registry keys."

In the log file "Detail_GlobalRules.txt", I've been able to find the following error messages -

  • 2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSearch.

  • 2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerSCP.

  • 2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer.

  • 2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent.

When I look at these keys in the registry, all of their permissions are blank. My problem is that I cannot find any good information on how to reset these keys. This is on my new home dev box, and I think these settings got corrupted during the migration from my previous machine. In reviewing the web, there doesn't seem to be good information, and what there is suggests using subinacl.exe. But after trying it and seeing it is an XP-based program, I'm at a loss on how to continue.

Configuration - Windows 7/64bit Home Edition, SQL2008R2, 6gb ram.

Suggestions?

Su

Is it possible to determine how many servers are connected to an anycast address?

It's possible to set up several servers on the internet with the same IP address, using anycast. Thanks to routing protocols, the server geographically closest to you responds to queries.

I simply wonder if it is possible to determine how many servers are connected to one IP address.

Also, is it possible to find the other IP addresses for them?

  • You could probably check some BGP looking-glass servers around the world and get a pretty good idea of how many endpoints are advertised for the ASN... whether they are being anycasted or represent a multi-homed site is something you can't necessarily determine, though.
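
As a related probe, many anycast DNS operators expose a per-node identifier via a CHAOS-class TXT query, which at least tells you which node answered you. The address below is F-root (which is anycast); support for the query varies by operator:

```shell
dig @192.5.5.241 hostname.bind chaos txt +short
```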

    From gabbelduck

problem with redirecting

I have something like myhost.dyndns.org. When I put myhost.dyndns.org in a browser, it takes me to the ADSL modem's IP (10.0.0.138), but I want it to take me to my local host (the Apache server). How do I solve this?

  • You must set a port forward on your router/modem at least for port 80 to the (local) IP for your server.

    Beware, this can be dangerous as anyone can access this website in your private network.

    Ognjen : How to do this? You mean set NAPT?
    SvenW : Depending on your setup, something like NAPT might be available for you. In most cases though, you will have to look for something called port forward or a variant thereof. Actually, NAPT is a combination of normal NAT and port forwarding, if I remember this correctly.
    Grizly : Check out http://portforward.com/
    From SvenW

Simultaneously uploading files to multiple ftp

I have two remote servers: suppose A and B.

I have my own local desktop from which I want to transfer a file simultaneously to servers A and B. Is there any way or any tool through which I can do this?

  • If you're using ssh, then parallel-ssh will do it.
    (debian package pssh)

    If you really are using ftp, then I suggest thinking about using ssh.

    Most windows ftp clients will let you script them. example: http://winscp.net/eng/docs/scripting

    From DerekB
  • It depends.

    If you use ftp, and if by "simultaneously" you mean "with a single command", then just use curl to upload the file to several ftp servers

    $ curl -T my_local_file -u userid:password ftp://servera/path/ ftp://serverb/path/
    

    For small size files this will be near enough simultaneous anyway.

    If the files are larger, or you really need closer synchronization, you could run several instances of curl in the background

    $ for srv in servera serverb; do curl -T my_local_file -u userid:password ftp://${srv}/path& done
    

The Real World - Systems Architect and Systems Administrator - What is the difference / what am I doing?

I realize that terminology in this field can be slightly ambiguous and that there is obviously some overlap in roles, but hopefully some context will help. I had much fun these last six months designing and implementing a new workflow from the developers to the end users, and for the people maintaining the system at my University.

This consisted of implementing database servers, web servers, a project management system, and a Mercurial repository system. I also have been tying different parts of the system together with some automation to improve workflow and make it easier on the developers. I was basically in charge of most of the systems we chose (I was a developer for the same team for the 1.5 years prior to this), and of working out all the details. I am hoping to have time to further upgrade this into a Puppet-driven solution with easy support for clustering (fault tolerance) in mind.

Does this fit under Systems Administration, Systems Architecture, or something else altogether? I have had a phenomenal amount of fun doing this (it's exciting to go to work every day), and I want to know the proper area to set my sights on.

I've already read the Wikipedia posts. Mainly just want to know what the above duties primarily fall into, because frankly my limited knowledge makes the Wikipedia reference feel ambiguous in the context of the above.

  • Well...

    In Theory:

    • Systems Architect interacts with end users and designs the basic architecture of the system. Primarily this is generating requirements, not specifications.
    • Systems Engineer designs and builds stuff (more like what you described). This would include the actual specifications.
    • Systems Administrator keeps it all running later.

    In Reality:

    • The three job titles are frequently used interchangeably in IT and there's usually some aspect of all 3 in the jobs.

    I don't typically hear "systems architect" used outside of sales organizations... Usually an "engineer" is somebody who designs and builds for another organization, which later runs it themselves, while an "administrator" works within the organization. What you describe absolutely sounds like work a "system administrator" does to me, and specifically sounds like the kinds of things I've done in multiple jobs where I had the title "System Administrator" or something like that.

    Note also: "architect" and "engineer" are protected terms in some jurisdictions and their usage by people without the proper credentials can be frowned upon.

    From freiheit

IPsec/L2TP VPN with OSX client: xl2tpd reports "maximum retries exceeded"

I'm following this guide for getting an IPsec/L2TP VPN server set up on a Gentoo machine, and I'm having trouble getting an OS X client to connect. From the logs, I believe I'm making an IPsec connection OK, but xl2tpd is refusing to go any further in the connection process. My setup (names changed):

  • Home server is directly connected to the Internet - no NAT - at example.com
    • vpn.example.com is an alias for example.com
    • Both addresses are provided through a dynamic DNS service - example.com's IP is not fixed
    • Home server's internal subnet is 192.168.1.0/24
  • OS X client runs 10.5.6 and has a dynamic IP (is a "roadwarrior")

My config files are as follows:

ipsec.conf

version 2.0

config setup
        nat_traversal=no
        nhelpers=0

include /etc/ipsec/ipsec.d/examples/no_oe.conf

conn L2TP-PSK-NAT
        rightsubnet=vhost:%priv
        also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
        authby=secret
        pfs=no
        auto=add
        keyingtries=3
        rekey=no
        type=transport
        left=%defaultroute
        leftprotoport=17/1701
        right=%any
        # Using the magic port of "0" means "any one single port". This is
        # a work around required for Apple OSX clients that use a randomly
        # high port, but propose "0" instead of their port.
        rightprotoport=17/0

ipsec.secrets

: PSK "testkey"

xl2tpd.conf

[global]
port = 1701
access control = no
debug avp = yes
debug network = yes
debug state = yes
debug tunnel = yes

[lns default]
ip range = 172.21.118.2-172.21.118.254
local ip = 172.21.118.1
require chap = yes
refuse pap = yes
name = LinuxVPN
pppoptfile = /etc/ppp/options.xl2tpd
ppp debug = yes
length bit = yes

options.xl2tpd

ipcp-accept-local
ipcp-accept-remote
ms-dns  192.168.1.27
noccp
noauth
crtscts
idle 1800
mtu 1410
mru 1410
nodefaultroute
debug
lock
proxyarp
connect-delay 5000
silent

And the log entries:

*snip*
Sep 05 13:40:32 [pluto] "L2TP-PSK-noNAT"[14] 137.112.114.88 #28: STATE_QUICK_R2: IPsec SA established {ESP=>0x0cb56f8c <0x319c29ff xfrm=AES_128-HMAC_SHA1 NATD=none DPD=none}
Sep 05 13:40:39 [xl2tpd] Maximum retries exceeded for tunnel 23214.  Closing.
Sep 05 13:40:46 [xl2tpd] Connection 70 closed to 137.112.114.88, port 63835 (Timeout)
*snip*

Why can't I get xl2tpd to accept the connection? I can't even find the relevant xl2tpd log files to continue debugging - all I get are those two lines in the syslog.

  • Figured it out. I'm no expert, so I don't know why this works, but I was able to get a connection by adding the following lines to the conn L2TP-PSK-noNAT section of ipsec.conf:

    leftnexthop=%defaultroute
    rightnexthop=%defaultroute
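    For reference, the complete conn L2TP-PSK-noNAT section then looks like this (a sketch: the original section from the question plus the two new lines):

```
conn L2TP-PSK-noNAT
        authby=secret
        pfs=no
        auto=add
        keyingtries=3
        rekey=no
        type=transport
        left=%defaultroute
        leftnexthop=%defaultroute
        leftprotoport=17/1701
        right=%any
        rightnexthop=%defaultroute
        rightprotoport=17/0
```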
    From Tim

MySQL binlog not logging

Have added the directive in my.cnf

log-bin

Both the binlog and index files are created, but the binlog file seems to remain unchanged even when I insert large amounts of data into the db.

  • One possibility: if you are inserting into a transactional table but have not yet committed the data with a COMMIT statement, all those inserts are cached until the commit.
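    A sketch of how to check this (t is a hypothetical InnoDB table, and log-bin is assumed to have been active since the last server restart):

```sql
SHOW MASTER STATUS;  -- note the current binlog file and position
START TRANSACTION;
INSERT INTO t VALUES (1), (2), (3);
SHOW MASTER STATUS;  -- position is unchanged: the inserts are only cached so far
COMMIT;
SHOW MASTER STATUS;  -- position advances once the transaction is committed
```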

  • This may be a wild guess, but it sounds like this is either a secondary master or a slave host? Add the following line to your my.cnf and restart mysqld:

    log-slave-updates
    

Using APC compiled with mmap for session management

Excuse my ignorance on mmap.

If APC is configured to use mmap does that mean virtual memory will be mapped to disk and I will not be using physical memory ?

As my needs are very basic, rather than using memcached to implement memory-based session management I wrote a custom session handler for APC, but it appears to me that an mmap-based installation of APC is still going to be writing to and reading from disk, and my custom session management will really be no better, or faster, than the default file session management. Am I misinterpreting mmap?

Thanks!

  • Basically, mmap is very smart about memory usage. You map a file into memory using mmap, and only the bits of the file that you actually touch get read into memory. Even better, if multiple processes mmap the same file, they share the same area of memory. When you write to that RAM, mmap won't immediately write to disk; it may hold onto the dirty pages for a while.

    • mmap for a single process can reduce disk I/O. If you change the same block twice, it's possible it only gets written to disk once.
    • mmap can save physical memory, because only the blocks you access get read into memory (which can be a disk I/O saving, too).
    • mmap is totally awesome if you have multiple processes using the same file via mmap. They share a single copy in memory of that file. This can be used for interprocess communication, and is how shared libraries on Linux only use a single copy of the library in memory.

    The only way to be certain is to test (benchmark), but I would expect mmap to work for session handling, as long as you handle locking properly.

    Steve : freiheit, much obliged, thanks for the very concise explanation. So my take is, for session reading I'll typically be pulling from memory, but if I'm constantly updating my sessions
    Steve : ... the difference will not be as significant, depending on how the write buffering is set up. As you suggest, my best bet is to do some testing. Thanks again.
    From freiheit

MySQL - How can I find the memory usage of indexes?

mysqltuner tells me to increase key_buffer_size dramatically. I don't have free memory for that, so I would rather remove unnecessary indexes, if there are any.

How can I find the memory usage of indexes (i.e. how much memory each index occupies)?

Is there a reporting tool or query that can get this information?
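  • One place to start (a rough estimate, not a precise measurement): the data dictionary keeps a per-table total of index size, which you can sort to find the heaviest tables. Since key_buffer_size caches only MyISAM indexes, you may want to restrict the query to engine = 'MyISAM'.

```sql
-- approximate index footprint per table, largest first
SELECT table_schema, table_name, engine,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb
FROM information_schema.TABLES
WHERE table_schema NOT IN ('information_schema', 'mysql')
ORDER BY index_length DESC;
```

    This reports per-table totals only; as far as I know there is no direct per-index figure here, so the practical check is to compare index_length before and after dropping a candidate index.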

Throttle bandwidth to API like Twitter does

Looking to limit the number of API requests from clients. Wondering if there is a way to do it with Apache, or do I have to write some code?

  • I wouldn't do it in apache.. I'd do it at network layer with iptables.

    iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set

    iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 86400 --hitcount 100 -j REJECT

    Change 86400 to the number of seconds you want to keep the block for (86400 is 1 day), and 100, is the hit count, how many you're prepared to allow per IP.

    You can also change -j REJECT to -j DROP, which defines the packet behavior when the condition is met. DROP seamlessly drops packets, and REJECT returns a "port unreachable" or similar error.

    That said, there was a mod_throttle that would do something similar, but I can't seem to find much information about it. I think it feels neater to do this kind of thing at the network/kernel level, rather than in apache. Apache is good at serving requests. Let it do what it does best, and don't burden it with having to track connections too.

    (yes, I did just copy my answer to a previous question..)

Querying the Active Directory domain of a Windows 2008 host in SQL

There is code in our shop that must query a SQL Server 2008 server, determine the Active Directory domain that the host belongs to, and, in SQL, create Windows login principals based on this information. Under Windows 2003 server, it was possible to query the domain's name through SQL Server like so:

DECLARE @Domain nvarchar(255) 
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE', 'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon', N'CachePrimaryDomain',@Domain OUTPUT 

SELECT @Domain AS Domain

However, this no longer works in Windows 2008 ('CachePrimaryDomain' registry key doesn't exist anymore). Anyone know if there is a registry key that reliably reports the Active Directory domain a Windows 2008 server belongs to? Better yet, is there an entirely different way of handling this that makes more sense? Thanks.

  • First be sure the machine is on a domain and not part of a workgroup.

    Then you can find the "Domain" key here:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    

    You may need to use T-SQL string functions SUBSTRING and CHARINDEX if you are only looking for the left half of the domain before the '.'

    If you are looking for another way to do this without the registry, consider a SQLCLR project or potentially a PowerShell script that uses the Domain.GetComputerDomain() .NET method.
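    A sketch of that registry read in T-SQL, mirroring the Windows 2003 query from the question (verify the value on your own hosts, since this key holds the DNS-style domain name, which may differ from the AD name as noted below):

```sql
DECLARE @Domain nvarchar(255)
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
     'SYSTEM\CurrentControlSet\Services\Tcpip\Parameters',
     N'Domain', @Domain OUTPUT

-- keep only the leftmost label, e.g. 'corp' from 'corp.example.com';
-- appending '.' keeps CHARINDEX safe when there is no dot at all
SELECT LEFT(@Domain, CHARINDEX('.', @Domain + '.') - 1) AS Domain
```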

    Eldergriffon : Thanks, this helps a lot. There do appear to be circumstances, however, in which the name of the Active Directory primary domain can be different than the left part of the network domain provided in the key you mentioned (in our corporate network this is the case). Good call on the CLR idea, though, I had forgotten about that.

How to convert (p2v) a mounted physical drive with or for VMWare Workstation or vCenter Converter?

Several posts (like this one) seem to indicate that if you have a physical hard drive, you can "just" connect it with VMWare and it will be converted into a VMWare virtual machine.

I have a physical disk which is bootable (but not booted into!), and is accessible by Windows as drive H:. Among all the options that VMWare offers to convert from (a live system, a VHD image, etc.), it doesn't list a way to simply pick up a physical drive and use it.

How can I convert this physical drive with a working OS to a VMWare image?

note: I also have a VHD backup (larger than 137GB), but not the VMC file, because I chose Full Backup from Vista; VMWare Workstation can only connect if the VHD is accompanied by a VMC file

  • What I've done before is to create the VM with a blank disk, but boot it from a recovery Linux ISO (e.g. SystemRescueCd), mount the VMware disk, add the network details, then attach the physical drive to a Linux box and use dd over ssh to transfer the entire drive contents to the VM disk.

    Once it's all copied let the VM boot from it.

    For recovering what was a running system this has worked fine. The only problem may be with having the drivers for the type of drive that the VM now has.
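    The copy step can be sketched like this; it demonstrates the same block-for-block dd idea locally with image files, since the real device names depend on your hardware (over the network it becomes dd if=/dev/sdX | ssh root@vm 'dd of=/dev/sdY', with sdX/sdY as placeholders):

```shell
# make a small fake "disk", clone it block-for-block, and verify the copy
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=4M 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "clone is identical"
```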

    From DerekB
  • If you are able to boot the hard drive on the hardware it used to live on, you can P2V it with http://www.vmware.com/products/converter/

    From Bernie

Make services not start automatically after reboot (as they require access to an encrypted partition)

Hi,

I use Ubuntu Server 10.04. I more or less only want the server to be accessible over SSH after a reboot. I will then log in and mount the encrypted partition myself, after which I start the services that use it.

How would I go about setting something like that up?

(My first idea was to have everything except /boot in an encrypted LVM, but I never got logging in through SSH and mounting the LVM to work. Initramfs was a bit too complicated for me. Otherwise I think this would have been the best solution.)

  • Services get started via entries in the various /etc/rc?.d/ folders (rc0.d through rc6.d). You will need to identify the services you don't want to start, and then change the Sxxservicename links (xx is a number between 01 and 99) to Kxxservicename. Please be aware that these changes may get overwritten when the relevant packages are upgraded. If you want to keep the changes, I would recommend changing the start/stop level definitions in /etc/init.d/servicename (each service has a script in that folder that actually executes the start/stop, and it also contains the default settings for the runlevels where the service should start and stop).

    Then simply change the line for the encrypted partition in /etc/fstab to include the "noauto" option, which will prevent it from getting mounted at boot time. You can still mount it manually using the mount command.

    WARNING: You must make sure that your boot process can complete without any of the data on the encrypted partition. Otherwise you are digging a big hole for yourself (and you'll need a live CD to get out of it).

    SvenW : Nope. Ubuntu changed away from the old-style init system to something called upstart a few releases ago. Still haven't had time to look into this...
    wolfgangsz : Then please explain to me why I have exactly those files and that boot behavior on my Ubuntu 10.04 box? Or, if you DO know more about how this works, provide a better answer to the OP, so we can all learn.
    Gilles : @SvenW, wolfgangsz: This answer is not wrong, it's just incomplete. Upstart provides compatibility with System V init scripts, and the distribution still ships with quite a few. This answer will take care of those, but if any of the upstart services require the encrypted filesystem, you also need to prevent them from starting at boot time.
    From wolfgangsz
  • Your distribution uses upstart to manage services, so you need to take care of both upstart services and “old-style” (system V) services.

    For all upstart services that require the encrypted filesystem, edit the corresponding file in /etc/init and change start on foo to start on (foo and encrypted-filesystems) and stop on bar to stop on (bar or runlevel [0126]).

    For all system V services that require the encrypted filesystem, rename the symbolic link /etc/rc2.d/S??foo to /etc/rc2.d/K50foo.

    After you've mounted the encrypted filesystems, run the commands

    initctl emit encrypted-filesystems
    telinit 3
    

    If you want to unmount the encrypted filesystems without rebooting, I think telinit 2 will stop all the affected services with the scheme I've proposed.

    From Gilles
  • I'd advise using update-rc.d (check the man page) to disable startups in runlevel 2, since it should always "do the right thing".
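    For example, with a hypothetical service name mydaemon (run as root; check man update-rc.d on your release for the exact syntax):

```
update-rc.d mydaemon disable 2   # turn the S link in /etc/rc2.d into a K link
update-rc.d mydaemon enable 2    # revert once you no longer need this
```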

    It would also be a good idea to put something in runlevel 2 to alert you, e.g. by email, so the server doesn't sit there unnoticed after an unexpected reboot.

    Then ssh in, mount the crypt volume and init 3.

    [Double check that it's still OK after package updates]

    From DerekB