Tuesday, January 25, 2011

IP Address Restriction

Hi,

I need some technical help regarding IP address restriction in IIS7. My problem is how I can block all IPs except the following IP address (e.g. 69.59.196.212) in my IIS7. I want to give access to at most 4-5 IPs and block all the rest.

Thank You.
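
For reference, one common way to do this in IIS7 is the IP and Domain Restrictions feature (the ipSecurity section). A minimal sketch using appcmd, assuming that feature is installed and "Default Web Site" is the site in question:

%windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/security/ipSecurity /allowUnlisted:false /commit:apphost
%windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/security/ipSecurity /+"[ipAddress='69.59.196.212',allowed='true']" /commit:apphost

Repeat the second line for each address that should keep access; with allowUnlisted set to false, everything else is blocked.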

How can SSH be set up to allow remote port forwarding but not execute commands?

How can an SSH connection be set up to allow port forwarding but not execute commands?

I know that the ssh login can use -N to stop commands from executing, but can the ssh config file be set up to disallow it?

Restricting the type of shell and the path in Linux is an option, but can it be done in the SSH configuration itself?

  • this article should set you on the right path

    http://www.semicomplete.com/articles/ssh-security/

    From eric
  • Look at man sshd and search for AUTHORIZED_KEYS FILE FORMAT

    What you want to do is create a public/private key pair, and put the public key in the ~/.ssh/authorized_keys file as normal. Then edit the authorized_keys file to add the string:

    command="/bin/false",no-agent-forwarding,no-pty,no-user-rc,no-X11-forwarding,permitopen="127.0.0.1:80"

    It will end up looking kind of like:

    command="/bin/false",no-agent-forwarding,no-pty,no-user-rc,no-X11-forwarding,permitopen="127.0.0.1:80" ssh-dss AAAAC3...51R==
    

    You would want to change the argument to 'permitopen' and possibly change some of the other settings, but I think that's basically it.

    vfclists : I guess the permitopen sets the local ports that can be forwarded from the user's end. Does it affect remote port forwarding? Does it apply only to that key?
    Slartibartfast : The authorized_keys file is on the remote (ssh server) end. It indicates host+port combinations that clients with the authorized key are allowed to connect to via the server. The port that you use on the local (ssh client) side is irrelevant (and probably not communicated to the server), so it is omitted. Yes, it applies only to that key (which is why it is listed on the same line as the public key corresponding to the key that is permitted)
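
    As a concrete usage example from the client side (key path, user, host and port numbers below are placeholders, not from the original answer), you would combine such a key with -N so that no remote command is requested:

    ssh -i ~/.ssh/tunnel_key -N -L 8080:127.0.0.1:80 user@server

    Here the -L target matches the permitopen="127.0.0.1:80" restriction; forwarding to any other destination, or asking for a shell, should be refused by the server.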

Creating swap files faster

I'm using Amazon EC2 and wish to be able to quickly generate large swapfiles (~10+GB) on instance startup. Unfortunately, I/O speed on my instances (c1.xlarge) is slow enough (20 MB/s) that this operation takes 10+ minutes, which is unacceptable for my usage.

I am aware that the swapfiles must be pre-allocated to be used, so I can't use sparse files.

However, is there some command to allocate blocks without spending a large amount of time zeroing them out? Also, if this command exists, am I correct in assuming that a page in the swapfile is zeroed out before a user process has access to it (mitigating security concerns)?

  • You didn't indicate what method you were trying to avoid.

    Traditionally, you would issue a dd command that would in turn pump out a zero'd file of the appropriate size, then run mkswap, add the entry to /etc/fstab, and then swapon to activate it. I've attached a rather hastily-written example shell script that I'm sure has errors (it's late where I'm at and the fstab entry is far from perfect)

    #!/bin/bash
    # --- allocate 10Gbyte of swap space as 10 separate 1Gbyte files
    # --- that are brought online sequentially during processing
    for swpidx in 01 02 03 04 05 06 07 08 09 10
    do
      dd if=/dev/zero of=/swapfile.$swpidx bs=16384 count=65536   # 16384 * 65536 bytes = 1 GiB
      mkswap /swapfile.$swpidx
      echo "/swapfile.$swpidx    swap    swap    defaults    0 0" >> /etc/fstab
      swapon -a
    done
    swapon -s
    

    However, it sounds like you are trying to avoid this method. The fastest solution that I could provide would be to use a swap partition, which does not require the zero-out process, and can be brought online in minutes. If your instance is running LVM and you have an existing volume group that you could carve a partition out of, that would work just as well, and the allocation could be completed in just a few minutes.
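
    A minimal sketch of that LVM route, assuming an existing volume group with free space (the names vg00 and swap01 are placeholders):

    lvcreate -L 10G -n swap01 vg00
    mkswap /dev/vg00/swap01
    swapon /dev/vg00/swap01
    echo "/dev/vg00/swap01    swap    swap    defaults    0 0" >> /etc/fstab

    Because the logical volume's blocks never need to be zeroed, this completes almost immediately compared to dd-ing out a file.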

    I think I should mention that carving out a swap space of this size is a bit unusual, even for a server; and I only say that because most servers have several gigs of RAM attached when dealing with programs/data of that size. Not to pry or anything, but are you really needing that much swap space?

    Another thing you may wish to consider is re-tuning your workload, rather than trying to dynamically allocate swap space. While it's great to have that much "on-demand", as you yourself pointed out, it will quickly become a bottleneck due to the slow I/O throughput on your server instance. By the time you exhaust your memory and you're essentially "living in swap", you'll find that the 20Mbyte/sec transfer rate turns your instance into a 386SX.

"'' is not a valid login or you do not have permission" on sql server installation

I have tried installing SQL Server 2008 multiple times on my machine, and I always receive this error about 3/4 of the way through: '' is not a valid login or you do not have permission.

I used the SYSTEM user for the install; when I try to use my admin login/password, the installation manager shows other errors.

  • Did you run the installer as an administrator?

  • I solved my problem; it occurs when your computer name equals your username.

  • Thanks Alex!

    It saved my day!

    Strange bug / behavior though...

  • I faced the same problem, and thank you for helping me to solve it.

    From Richard
  • WOW, It really worked! Thank you Alex :)

    From Vahid

Writing to network share failed

I have outlook files stored on a network share and accessed by clients directly.
Outlook is version 2003, the clients are Windows XP and the server is 2003. The files are quite big, at around 3GB.

One of the common problems that happens is that I get 'delayed write failed', and this happens only on these PST files.

When this happens I have to run scanpst.exe to fix the PST file. I did not find any entries in the event logs that I could relate to the issue.

What would you suggest to change to fix the issue or where to look to further diagnose it?

EDIT: No packet loss on ping, and ping times are within the normal range for a LAN.

  • Not that this is a great answer but Microsoft definitely doesn't support PST over LAN.

    http://support.microsoft.com/kb/297019

    mfinni : No, that is the right answer.
    From Tsynapse
  • Although it is true that PST files are not supported over a LAN, I understand your pain. I have had delayed write failed errors on my network shares as well, most of the time when transferring big files (about 20 gigs). I suggest you check your disk configuration on the server. There are many possible leads. Check this: http://www.gibni.com/windows-delayed-write-failed-solutions. In my case it was a network card setting (I had to update the driver). But I had another instance when the disk I was writing to was faulty and had to be replaced, after which the delayed write failed errors stopped.

    Unreason : Thank you for the pointers - I will follow up on the storage issues, too.
    From redknight

Wordpress: 404 page after logging in

I get a 404 page when logging in to WordPress for all my users. It adds an extra "/" so it's like website.com//

I am using:<?php wp_loginout(urlencode($_SERVER['REQUEST_URI'])); ?>

So that after a user logs in, it brings them back to where they were.

Is there something wrong with that code? Any ideas? This might help: http://pastebin.com/28tURS8m Thanks

  • OK. I actually had a few minutes to test this on my Wordpress 2.9.2 blog.

    The issue you are having, as mentioned in my comment, is that you're escaping $_SERVER['REQUEST_URI'] in the parameters of wp_loginout(). That function, wp_loginout(), already runs its argument through a URL cleanser called esc_url().

    So, if you just write...

    <?php wp_loginout($_SERVER['REQUEST_URI']); ?>
    

    ... your code will work as you want it to.

    thegreyspot : James works great. I really appreciate it. I had taken the code from some forum, no idea what it meant. Wish i could give you plus1 :) not enough reputation...
    James : No worries. I'm glad it worked. Escaping strings can always be a tricky endeavor in PHP development. Every function does it different or behaves in a way that puzzles one for hours.
    From James

Checkinstall failed with /root/rpmbuild has no source directory

Hi, I am trying to use checkinstall to build a package from source code. However, when I run checkinstall, it asks:

/root/rpmbuild has no source directory, please write the path to the rpm source directory tree.

I am running on Fedora 12 and the system was installed through kickstart via the repository of the FC12 DVD. I was not aware of the rpm source directory during the installation.

So how can I check whether the rpm source directory exists or not? If not, how do I create the rpm source directory so that I can satisfy checkinstall and build the package successfully? Or can I bypass it?

thanks a lot

  • Hi,

    the solution you seek is:

    (as root) mkdir -p /root/rpmbuild/SOURCES

    From lope
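
    If checkinstall still complains after that, creating the rest of the standard rpmbuild tree may help as well (this is a guess based on the usual rpm layout, not something covered in the answer above):

    mkdir -p /root/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}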

Can XenServer be administered remotely?

I keep reading conflicting reports regarding XenCenter's management capabilities.

Is it true that you have to be on the same subnet to connect to XenServer via XenCenter?

  • So long as traffic is properly routed, you should be able to manage remotely via XenCenter; in fact, I've done it before. I believe XenCenter uses ports 80 and 443.

  • In my experience, I have had a XenServer hosted at a colo datacenter two counties away, and XenCenter running on a local PC in my office. I have never had any problems using most of the features in XenCenter to administer the Dom0 or DomUs on the XenServer.

    However, I did have an issue with a firewall on the server-side (data center firewall) blocking ports that are used for the Console built-in to XenCenter. You would just have to make sure you have those ports unblocked. I believe they use 6001-600X (X being the total number of DomUs you have, so your fourth virtual machine would have its console available on 6005).

    I do believe there are options in the command line of the XenServer that allow you to configure which hosts are allowed or disallowed from accessing any remote administration but I never enabled or used those features.

    As ChrisSoyars stated above, XenCenter connects through ports 80 and 443, plus port 22 for SSH. So, as long as those ports aren't blocked on the server-side firewall, you shouldn't have any problems.

    From James

Installing PHP 5.2.1 with Apache 2.2.x in Windows

Hi guys,

I have installed Apache. It's working fine.

I have also installed PHP 5.2.1.

I have also enabled the LoadModule line in httpd.conf.

I have also added the following in mime.types:

AddType application/x-httpd-php php
AddType application/x-httpd-php-source  phps

But I still get the .php page along with the PHP code: it gets executed but also shows the PHP code as written. What other configuration is remaining?

Solution: add a SetHandler block to httpd.conf:

<FilesMatch \.php$>
      SetHandler application/x-httpd-php
</FilesMatch>
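
For reference, the AddHandler approach mentioned in the comments below would be a one-line alternative in httpd.conf (assuming the PHP module itself is already loaded):

AddHandler application/x-httpd-php .php
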
  • Have you installed PHP?

    Or you can try one of the other popular packages that provide basic web server functionality.

    KoolKabin : Yeah, I do have PHP installed. It's in the c:\php folder. I am trying to do it manually... not using XAMPP or Zend Server
    KoolKabin : thnx.. AddHandler worked.... Actually I found out that I had placed my AddType code in the wrong file.
  • Did you make sure to restart Apache after making the config changes?

    Also clear your browser's cache - this was the problem the last time I had this issue. I went so far as using another browser to make sure.

    KoolKabin : yeah, it's working fine now... I used the AddHandler method
  • While you seem to have fixed it (as posted in other comments), I think I see your problem.

    The AddType directive seems to like having a dot in the extension. Your posted configuration is missing them.

    AddType application/x-httpd-php .php
    AddType application/x-httpd-php-source .phps
    
    KoolKabin : Should we add the given code to mime.types or httpd.conf?
    From Charles

oracle rman full database recover

I want to restore a full database backup of a database in NOARCHIVELOG mode, but not the last backup. How can I select which full backup to restore?

  • If you have a proper backup, you can specify the point in time of the restore as follows:

     RUN
     { 
         SET UNTIL SCN 1000;    
         # Alternatives:
         # SET UNTIL TIME 'Nov 15 2004 09:00:00';
         # SET UNTIL SEQUENCE 9923;  
         RESTORE DATABASE;
         RECOVER DATABASE;
      }
    

    Look into Oracle documentation at http://tahiti.oracle.com for your specific Oracle version for more details.

    IC.

    Gaius : In NOARCHIVELOG mode you can't do a point in time recovery - where are the archive logs to get you there?

Good Hosting Providers With Zend Framework Support

I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough- until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer and most things work just fine, most of the time.
I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money making websites- they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains- not much.
Who should I look at, and who should I look out for?

Also- I put this on SO instead of SF because of the Zend Framework specific requirement. If I'm wrong, do as you wish.

  • I'm going to vote this goes to ServerFault or SuperUser, but I am a ZF developer myself who had similar questions originally. I recommend you look into VPS hosting. A service such as Slicehost or Rackspace will give you full root access to a clean Linux distro of your choice, so you can literally do whatever you want with your server. Install random software, change the webroot, create as many domains as you like, install APC (which will help Zend big-time since there are a lot of files to load). VPS will give you the flexibility of a dedicated server for less of the cost. If you aren't comfortable with Unix admin, don't worry; Slicehost's articles are amazing. They will walk you through installing anything you like. And if you go with an LTS Ubuntu distro, installing software is a breeze with aptitude. You can run a good number of Zend Framework sites on a 512 slice for $38/month.

  • I know servergrove.com has expertise in ZF (their techs know the framework) and their support is supposed to be top-notch. I have every intention of deploying my next project there.

    From Iznogood

CA ArcServe r11.1 - have to switch Tape Drive Offline then Online to finish backup

I'll keep it brief: I have an HP Ultrium 1 in a server currently running CA ArcServe r11.1.

I have 5 daily backup tapes, each of which is new. 3 of the 5 work fine without intervention, but 2 of them stop at varying points through the backup asking for a new tape, even though that tape is not full. The way I have found around this is to switch the tape drive offline for 10 minutes, then switch it back online while the backup is still running.

Has anyone ever seen this before? If so, any ideas how to permanently fix it?

If all else fails, just some pointers in the right direction would be appreciated.

Thanks

  • Wow, that sounds bad, is the tape drive under support? if so I'd get it swapped out, it might not fix the problem but at least you'll know you've done it. If you've still got the same problem I'd suggest a new set of tapes I guess.

    Richard : It is a completely new set of tapes, and it has only started happening since the new tapes. Unfortunately, the tape drive is out of warranty.
    Chopper3 : balls, sounds like you might need to mess around with trying to get new tapes - if it's the drive you're in a pickle
    From Chopper3
  • Try running an erase job on the tapes before their next use.

    From joeqwerty
  • I've experienced similar issues with new tapes before. I just sent them back and had them replaced. If a tape stuffs up when it's new it's somewhat unlikely to get better and considering they have a warranty there is no reason for you to persevere with them. While modern tapes have excellent long term reliability, it does require those tapes to be perfect when they are put in service.

How to allow members of a group to change file permissions on linux

I need to allow members of the group 'ftpusers' to be able to change permissions on all objects inside a certain directory. I was looking into how to do it but all I have found is how to do it on BSD:

chmod +a "ftpgroup allow writesecurity" /some/dir

I need exactly the same thing, but for Debian GNU/Linux.

Thanks

  • Just give the 'ftpusers' group write permissions and ownership on that directory:

    chgrp ftpusers <directory>
    chmod g+rwx <directory>
    

    And then set the setgid bit so all new files inherit the group ownership:

    chmod g+s <directory>
    
    Drasko : That was my thought exactly but it doesn't work, I get 'Operation not permitted' if I try to change anything
    Vitaliy : What are the current permissions?
    Drasko : drwxrwxr-x 9 drasko ftpgroup 4096 2010-03-25 10:20 dirname
    From Vitaliy
  • One solution (which I've had to use) is a cron job going through and changing the permissions of a specified directory and files under it. Not pretty but it works.

    If you want to extend the ability of users to change this, you might consider allowing the users from the ftpgroup to run chmod within the specified directory via an appropriate sudo rule.

    Or you can make a shell script which does the appropriate checks and performs the operation, and allow that script to be run via sudo. I do not suggest nor recommend a set-uid shell script.

    From mdpc
  • Only the owner of a file or root is permitted to change permissions in Linux (write access != permission change access)

    The only way I can think of is using sudo. I don't know if that would do the trick, and I'd be exceedingly cautious about how you specify the sudo rules so that the users don't have any additional privileges.

    Note that if they are connecting using an FTP server, sudo probably won't be the answer.
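
    A minimal sketch of the sudo-plus-wrapper idea mentioned above (the /srv/ftp path, the script name and the checks are all placeholders; this does not guard against symlinks or .. tricks, so harden it before real use):

    #!/bin/sh
    # /usr/local/bin/ftpchmod -- hypothetical wrapper run via sudo
    # Usage: ftpchmod <mode> <path under /srv/ftp>
    case "$2" in
      /srv/ftp/*) exec /bin/chmod "$1" "$2" ;;
      *) echo "refusing to chmod outside /srv/ftp" >&2; exit 1 ;;
    esac

    combined with a sudoers entry (edited via visudo) along the lines of %ftpusers ALL=(root) NOPASSWD: /usr/local/bin/ftpchmod.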

VMware ESXi 2U Server Vs. Traditional Bare Metal 1U Server

Hey All.

I have a Dell PE 2970 quoted and ready to order with the following specs:

  • 2X 6-core AMD 2.2ghz 6mb HT-3
  • 6X 73gb 15K SCSI 6Gbps 2.5in (RAID 10)
  • 16GB (4X4gb) 800mhz RAM
  • 2X Intel Pro 1000PT Single Port 1gb NICs

How will the above server perform compared to traditional bare metal server I am running currently? Current server is a 5 year old PE 1425, 2X 160GB SATA, 2X Intel 2.8ghz single-core, 2GB RAM.

Primary load is LAMP web server traffic, not much, max 50GBs per month.

I assume the PE 2970 running ESXi will handle the "load" with absolute ease, but I'm planning on adding development environment VMs for Java/Grails & Ruby on Rails (both CentOS 5.5) and Windows 2003 Server, which will all be routed via one of the NICs (primary NIC will be dedicated to CentOS 5.5 LAMP VM).

Not sure where the bottleneck will be performance-wise in the PE 2970, but trying to account for issues now before I take the plunge. I'm buying a Cisco ASA 5505 for the firewall as well.

Suggestions appreciated!

  • The new hardware should perform better than the old stuff. Your disk I/O should be markedly better since you'll have more platters involved, and they'll be rotating faster. The environment you describe doesn't look terribly disk I/O bound as it is, so you're more likely to be constrained by memory. At first the environment should be markedly faster than the old one. As you grow, you may end up with some CPU problems as runaway dev-processes chew resources, but that can be limited by not assigning every VM its max VCPUs. Looks solid to me.

    MVC You Know Me : Right, only concern is with multiple VMs competing for disk resources to the logical disk (i.e. 3X 15K SCSI from RAID10). The development environment VMs will be hosting live sites within 6 months, so have to plan for minimum 3 production VMs. My first steps into VM world, so trying to plan out in advance -- huge amount to learn, bare metal setups are cake in comparison...
  • The processors give you about 6 times the CPU grunt of the older Intel box, the RAM has (I think) about 2-3 times the bandwidth, and the hard disk subsystem has about 9x the IOPS and 6x the throughput of the older system. No real surprise there; I'm sure you'd figured that out having selected the components.

    In typical environments I'd be looking to consolidate 6-8 "average" servers similar to the bare-metal one you describe onto one of these when virtualizing. The 2970 is a pretty good mid-range server and the 6-core AMD Istanbul Opteron CPUs are quite good at virtualization as they support NPT/RVI.

    However I'd call out a few things to consider.

    1. Dell are about to release the R715 (the reference is buried in the middle of this press release) - this is their 11G successor to the 2970 which is based on their 9G platform. The R715 supports the Magny Cours 6100 series Opterons that are substantially better than the Istanbul CPU's and have a whole lot more cores (8 or 12 depending on the model) and memory bandwidth. The other platform improvements in the 11G line make it quite appealing but as with everything shiny and new it will cost more.
    2. As you consolidate more stuff into a single box you increase the impact of any outages or downtime. Every extra server VM you add makes it harder to shut the damn thing down for maintenance. That's fine so long as you keep an eye on it but dealing with that is one of the reasons people pay so much to have shared storage and VMotion capable licensed VMware setups.
    MVC You Know Me : Helvick, good points, particularly re: the next-generation R715. The PE 2970 has been on the market for at least 3 years, so I'll be buying a dated server, but nothing on Dell's site compares 2U-wise. Some people are doing iSCSI with teamed NICs connected to a SAN, but I have to imagine that would be very pricey given my total budget of $5K, which is just about blown (Cisco ASA 5505 + Smart support is @$500 and the VMware Essentials license is $500); I have a grand left to play with, and that may go to a Supermicro 1U for a backup server. Have to do something, the production server is 5 years old!
    From Helvick
  • My rule of thumb is that you can consolidate at about 2 real machines to 1 core that's virtualized as long as the services you are migrating aren't super CPU bound. With a 12 core host you can probably put 24 VMs on it comfortably as long as you have the RAM and IO. You're not going to be able to do better than a 6 disk RAID 10 in that chassis so that looks good. If you are allocating 8GB of disk per VM for a default install then that would be about 192GB total which should be pretty close to how much space you actually have.

    If you allocate 1G of RAM to each VM you can easily run out of 16G of RAM. If you can afford it and if you think that you will end up creating enough VMs to need it, then try and get 32G instead. I think that will be better sized to the capabilities of the host so that there is no particular bottleneck, you'll run out of disk, cpu and ram all at the same time. Better to get it now than try and put it in later, you may not want the risk and downtime of fiddling with the hardware or you may not even be able to get RAM for that kind of host at a reasonable price down the road.

    Of course if you just don't need it then what you've built should otherwise be fine.

    MVC You Know Me : Yes, 16GB RAM will suffice for now, and the idea is to bump up to 32/64GB as needed. Re: disks, these are 2.5" SCSIs, so I can actually get 2 more in there if the need arises (4X 15K SCSI would be nice...). Disk space of course is a bit of a red flag; I currently use @30GB, and will only have @210GB available, so if capacity increases that may become an issue down the road. Anyway, can only do so much with a $5K budget.
    mtinberg : Just curious, does your app have a lot of data that causes your hosts to take up 30G, or is much of that 30G empty space? With a VM it's usually possible to add more disk images and extend the filesystem live, so there isn't as much benefit in pre-allocating a large disk device for your VMs; just add disk where you need the capacity as you go.
    From mtinberg

Guest OS Support for VMWare ESXi 4 on an IBM xSeries 366 Server

I was able to install Windows Server 2008 R2 without a problem (which requires 64 bit) on an IBM xSeries 366 server. But I also found out the x366 doesn't support HAV so adding the Hyper-V role is out.

I saw that the x366 is in the HCL for VMWare's ESXi and it installed/configured without a glitch.

I tried to export a Win 2008 R2 virtual machine from Workstation into ESXi, which was going fine right up until the step where it starts moving the virtual disk to the server. At that point I was greeted with an error that it couldn't migrate x64 to x32.

I tried to create a new virtual machine in ESXi and selected Windows Server 2008 R2. When it booted from the iso I was greeted with an error that x64 can't install on x32 hardware.

So, I'm trying to figure out if I've done something wrong or if x64 guest support just simply isn't available in that box? I've tried looking everywhere that I could find but am coming up empty handed for my specific instance: Will Win 2008R2 run as a VMWare ESXi guest on an IBM xSeries 366?

Any thoughts?

  • Make sure Intel VT or AMD Virtualization features are enabled in the BIOS.

    From BillMorton
  • What you are seeing is a restriction that applies to ESX4 and any CPU that does not support hardware virtualization - you cannot virtualize 64-bit guests on Intel CPUs without enabling hardware virtualization.

    The x366 supports Hardware Virtualization (VTx in Intel terms), and if they have the Dual Core Paxville MP Xeons (the 70xx series Xeons) then it can be enabled. If you have those CPUs you can get this to work, but if your 366 has the older single-core Potomac Xeons you will not be able to do this because they do not support VTx.

    From Helvick
  • On older models of the IBM x366, the Xeons were high frequency but single core. Later models used dual-core, VT-capable processors.

    You can refer to http://www-03.ibm.com/systems/xbc/cog/Withdrawn/x366/x366aag.html to check your model reference.

    IBM x366 ref. 8863-3RY and 4RY should be VT capable.

    From petrus

Trouble installing php memcache extension

I'm trying to install memcache on MAMP but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP but phpinfo() doesn't list the memcache extension.

$ ./pecl install memcache
downloading memcache-2.2.5.tgz ...
Starting to download memcache-2.2.5.tgz (35,981 bytes)
..........done: 35,981 bytes
11 source files, building
WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match
running: phpize
Configuring for:
PHP Api Version:         20041225
Zend Module Api No:      20060613
Zend Extension Api No:   220060519
Enable memcache session handler support? [yes] : yes

...

Build process completed successfully
Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so'
install ok: channel://pecl.php.net/memcache-2.2.5
configuration option "php_ini" is not set to php.ini location
You should add "extension=memcache.so" to php.ini
  • hey.. I am running into the same problem. Did you find anything on this?

  • Not really an answer - I just can't comment, so I'm posting here. I have Apache + PHP installed via DarwinPorts; the Apache from ports is not enabled. I have installed MAMP version 1.9 and I'm trying to build the memcache extension into PHP 5.3.2.

    I'm following the option 2 instructions for building the memcache PHP extension for MAMP at http://www.lullabot.com/articles/setup-memcached-mamp-sandbox-environment.

    When I go to build memcached from

    /Applications/MAMP/bin/php5.3/bin/pecl

    I get output that seems to indicate that pecl is building for the PHP extension directory previously installed via darwin ports:

    WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match
    building in /var/tmp/pear-build-bmillett/memcache-2.2.5
    running: /private/tmp/pear/temp/memcache/configure --enable-memcache-session=yes
    ...
    checking for PHP prefix... /usr
    checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib
    checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20060613
    

    eventually I end up with:

    installing '/usr/lib/php/extensions/no-debug-non-zts-20060613/memcache.so' ERROR: failed to write /usr/lib/php/extensions/no-debug-non-zts-20060613/memcache.so (copy(/usr/lib/php/extensions/no-debug-non-zts-20060613/memcache.so): failed to open stream: Permission denied)

    Just for fun, I ran pecl via sudo and grabbed the extension module that was built (for the wrong source, it would seem). When I stuck it in my MAMP extension folder and configured it in php.ini, I got this error:

    [18-Jun-2010 15:38:00] PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20060613 PHP compiled with module API=20090626 These options need to match

    Is there any way to specify from which config and to where to build this module? Any enlightenment would be great.

    Thanks

    From Bretticus
  • Okay,

    Here's how I got this to install via pecl:

    I had installed PHP via darwin ports. When php-config was called, it was calling that file for my PHP 5.2.x installation.

    1. I added a directory to my MAMP installation: mkdir /Applications/MAMP/bin/php5
    2. Depending on which version of PHP I'm running, I link the bin folder. In this instance of building for version 5.3.2, from within the new php5 dir: ln -s ../php5.3/bin
    3. Now, update or modify path from your ~/.profile file: export PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5/bin/:$PATH
    4. MAMP doesn't set the execute bit on php-config for some reason, so cd to the php5/bin folder and run: chmod +x php-config
    5. Now just run pecl: sudo ./pecl i memcache (see the consolidated sketch just below)
    From Bretticus
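
    Put together, those steps amount to something like this (paths assume MAMP 1.9 with PHP 5.3, as in the answer; the directory for the final pecl call is inferred from the pecl path mentioned earlier):

    mkdir /Applications/MAMP/bin/php5
    cd /Applications/MAMP/bin/php5
    ln -s ../php5.3/bin
    # add the following line to ~/.profile, then start a new shell or source it
    export PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5/bin/:$PATH
    chmod +x /Applications/MAMP/bin/php5/bin/php-config
    cd /Applications/MAMP/bin/php5.3/bin && sudo ./pecl i memcache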

Query schtasks using Powershell

On a Windows 7 machine I can run a query to view all the scheduled tasks using schtasks.exe.

This is fine, but I would also like to filter the result set using something like:

schtasks /query | where { $_.TaskName -eq "myTask" } 

The problem is I don't think schtasks returns a properly formatted list for the where filter to work on.

I've also tried:

schtasks /query /FO LIST
schtasks /query | format-list | where ....

those don't work either.

What would be the best way to query the scheduled tasks on a local computer using Win7 and be able to filter them?

  • Here's a blog post I wrote about doing this. Essentially, I took the output of the /FO LIST /V, wrote that to a file, and imported it back in as objects using import-csv

    Link

    Joey : You're on the right track but writing to a temporary file is unnecessary here: `schtasks /query /fo csv /v|convertfrom-csv` works just fine
    jdiaz : this is neat but still not easily queryable
    Mike Shepard : Johannes: you're right, but I really (really) dislike properties that have embedded spaces/colons/slashes. jdiaz: What's not queryable? The script I posted and Johannes' revision both return native powershell objects with properties. They should be just as queryable as any other powershell entities.
  • if you don't need to do it in powershell then the following will work

    schtasks /query | findstr /i "mytask"

    ps version
    schtasks /query | ?{$_ -like '*mytask*'}

    From tony roth
  • You could try to use schtasks, which will leave you parsing text. This is almost always error prone, and definitely more difficult than working with the objects a cmdlet returns.

    There happens to be a TaskScheduler module in the PowerShellPack. Once you install the PowerShell pack, to get all of the scheduled tasks, use:

    Import-Module TaskScheduler
    Get-ScheduledTask -Recurse
    

    Since these are real objects, to find a task of a particular name, you can use:

    Get-ScheduledTask -Recurse |  Where-Object { $_.Name -like "*Task*"}
    

    In general, you will find that the PowerShell community has taken a lot of harder to use command lines, like schtasks, and turned them into easy-to-use cmdlets, like Get-ScheduledTask.

    See Also:

    Sending Automated Emails using the TaskScheduler Module

    Hope this helps

    Mike Shepard : This works great if you're using Win2k8 boxes (or vista/W7). Unfortunately, it doesn't work with W2k3 servers (which are still very common in my environment).
  • You are overthinking it.

    The command line for what you want:

    schtasks /query /s %computername% | FIND /I "%name_of_task%"

    Example:

    schtasks /query /s server01 | FIND /I "schedule"
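
    Building on the ConvertFrom-Csv one-liner Joey mentions in the comments above, a pure PowerShell filter with no temporary file might look like this (the task name is just an example):

    schtasks /query /fo csv /v | ConvertFrom-Csv | Where-Object { $_.TaskName -like '*myTask*' }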

iptables traffic redirection for multiple public ips

Hi,
On my linux machine I have:
- one physical interface eth0 with the public ip x.x.x.x
- one logical interface eth0:0 with the public ip t.t.t.t
- BIND DNS listening to t.t.t.t

If I ping t.t.t.t from any other place, it responds back, so that's good.
What I'm trying to do is set up BIND to use the t.t.t.t IP for zone transfers; the only bad thing is that traffic returning from the master server is going back to x.x.x.x.
I have tried some SNAT but I didn't quite hit the spot: traffic did match my rule, but the master BIND would still reply to x.x.x.x.
Any ideas?
Thanks

  • BIND has a transfer-source option that controls which local address is used to fetch zones. Add it to the options section of named.conf:

    options {
    
        // ...
    
        transfer-source t.t.t.t;
    };
    

    With this option set, BIND will send out transfer request messages from t.t.t.t. Responses from the master will then be sent back to t.t.t.t. You will also need to configure the master to accept zone transfers from t.t.t.t (if you've not done so already).

    You might also like to set the query-source and notify-source options to control which local address is used for making queries and sending notify messages respectively:

    query-source address t.t.t.t;
    notify-source t.t.t.t;
    

    Further documentation for these options can be found in the BIND Administrator Reference Manual, available from the BIND documentation page.

    w00t : Thank you for the help! It worked.
    From Phil Ross

How do I install a yum package group with puppet?

Does puppet have a way to install a yum package group (e.g. 'Development Tools'), besides exec?

  • I couldn't find anything in the Puppet Type Reference for the Package type, so I asked on the Puppet IRC channel on Freenode (#puppet, oddly) and got nothing so I think the answer is "not yet".

    From Andrew H
  • You could handle this through a Puppet Exec type to execute the necessary group install. I would be sure to include a good onlyif or unless option so that it only executes when needed, or set it to refreshonly and trigger it via a Notify, so that it is not run every time. The Exec type will execute the command locally on the Puppet client for you, provided it is triggered.
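
    A minimal sketch of such an Exec resource (the unless check is only one possible heuristic, since yum grouplist output differs between versions):

    exec { 'yum-groupinstall-development-tools':
      command => '/usr/bin/yum -y groupinstall "Development Tools"',
      unless  => '/usr/bin/yum grouplist "Development Tools" | /bin/grep -q "^Installed"',
    }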

oracle 11g driver to be configured to weblogic9.2

Friends,

I need the Oracle 11g driver to be configured for WebLogic 9.2.

Searching for this, I found that we need to add ojdbc5_g.jar, but no luck.

Can you advise here if you have any ideas?

Thanks in advance.

  • Where did you "add" the Oracle Drivers? JDBC drivers should be installed to WL_HOME\server\lib

    If there is already an ojdbc*.jar there, you should move it to a backup location. You should only have one version of the drivers in the lib folder; otherwise, it will just take the first and ignore all the others of the same vendor driver. Oracle comes shipped with some JDBC drivers pre-installed.

    Read this

    : Haller, thanks for your response. We were able to enable the Oracle 11g driver in our WLS 9.2 server through the steps below: add the ojdbc6.jar file to the lib directory of WebLogic (e.g. $bea\weblogic92\server\lib); comment out the existing driver specification for Database="Oracle" and Vendor="Oracle"; add a new specification for Database="Oracle", Vendor="Oracle" and Type="Thin".
    From mhaller

Wifi using Overlapping - Channel 3 and Channel 5

I see a wireless network (802.11b) set to channel 3 and another on channel 5 - both with similar, but not the same SSID (SLR_UNIQUE-PORTION-HERE), near my house. From the MAC, I see they are using Ubiquiti (http://www.ubnt.com/) equipment - read as commercial deployment.

Assuming they know what they were doing, what reasons would they have to use overlapping channels, and not put their network on channels 1, 6 or 11?

As well, I see both are using different SSIDs and are on different channels; are they using these in tandem or with the same client-side equipment? In other words, why didn't they use the same channel and SSID?

What issues will I have being on channel 1, or 6 with them being on 3 & 5? Will my reception be any worse than if they were also on 1 or 6?

Thanks, Dan

  • Assuming they know what they were doing, what reasons would they have to use overlapping channels, and not put their network on channels 1, 6 or 11?

    If they did a proper deployment they did a site survey and those channels gave them the best reception in the areas they put them in. But really without asking them ... it's all just conjecture and guesswork.

    As well, I see both are using different SSID's and being on different channels, are they using these in tandem or with the same client side equipment, in other words, why didn't they use the same channel and SSID?

    There could be any number of reasons for this ... my best guess - network segmentation. But really without asking them ... it's all just conjecture and guesswork.

    What issues will I have being on channel 1, or 6 with them being on 3 & 5? Will my reception be any worse than if they were also on 1 or 6?

    Unless they are right next door to you ... you shouldn't have any issues. Their signal will in all probability not be strong enough to interfere with yours at that point in such a way as to make a noticeable impact on your reception.

    From Zypher

Squid access.log export to excel

Is there a way to export the contents of access.log to an Excel spreadsheet so as to manipulate the data from there?

  • Easy enough with Perl, which has modules for working with Excel, but wouldn't a database be a more appropriate tool? Excel really seems like the wrong tool for this job. Sort of like using a lawnmower to pick flowers. Sure it will work but there are better ways.

    Farseeker : Best. Analogy. Ever.
  • Yeah, Excel can read CSV files, so your best bet would be to alter the log format such that it's comma separated (or uses some other delimiter). You can then open your CSV-formatted Squid log using Excel.

    See: http://wiki.squid-cache.org/Features/LogFormat

    Alternatively, if you don't care to alter the format, you can toss a script together to reformat to include the fields you need.

    From McJeff
  • The Squid native log format is:

    time elapsed remotehost code/status bytes method URL rfc931 peerstatus/peerhost type

    In fact, you can develop a simple parser using awk, or maybe perl or python, and delimit the data with a delimiter of your choice to get a CSV file. Something like this:

     awk '{ print $1","$2","$3","$4","$5","$6","$7","$8","$9","$10 }' /var/log/squid/access.log
    
    For more information, here is a small Squid log viewer written in Python: http://github.com/mezgani/sqview

    From mezgani