Friday, January 14, 2011

Can I install ClearCase 7.1 with a non-root user?

I would like to install ClearCase 7.1 on AIX, but I'm not the system administrator of this server and can only get the root password for the installation itself. I'm worried that after the install I won't be able to configure or manage ClearCase as another user, so I would like to install it as a non-root user.

  • As far as I know, on Unix or Linux, the installation of ClearCase requires root privileges.

    See this SO answer for an example of detailed installation (with a lot of links to IBM documentation).

    Generally, for this kind of administration operation, you should be able to get a "sudo root" right so that you can install and then manage ClearCase, while having every command you type recorded.

    That would be:

    • sudo root for all commands, only for the installation
    • sudo root for a handful of ClearCase commands for administration privileges (see the sketch below).
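
    A minimal /etc/sudoers sketch along those lines (edit it with visudo; the user names and ClearCase paths are hypothetical and depend on where ClearCase is installed on your system):

    # full root, intended only for the one-off installation
    ccinstall  ALL=(root) ALL
    # afterwards, only a handful of ClearCase administration commands
    ccadmin    ALL=(root) /opt/rational/clearcase/bin/cleartool, /opt/rational/clearcase/etc/clearcase
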
    From VonC
  • I have some experience with ClearCase on Solaris, Linux and Windows hosts.

    On UNIX, the ClearCase installation needs to install kernel modules for MVFS file system support, and init scripts for startup at boot among other things.

    ClearCase itself needs to start as root to load the drivers, and mount VOB file systems etc.

    I can't see an easy way to get around this. You might be able to install and run the Eclipse-based ClearCase Remote Client (CCRC) without root access, but I'm not sure.

    From sajb

Drop-in install Linux distro?

So, I have a remote computer with an SSH service installed, fully accessible to me. I have no physical access to the machine until tomorrow, but I don't want to wait.

I need to make an image of the hard drive, but the problem is, I am booted into the partition I am trying to image. So, the only option I can think of is to create a small partition, reboot, extract some small Linux distro to that partition, change GRUB's settings to auto-boot into that partition, chroot in and change the root password for the new install, reboot, pray, SSH back into the box (now running the drop-in install), image the main partition, change the GRUB default partition back, and reboot.

Anyone know of a distro or another solution to getting the image?

  • You can do this with Debian. I believe it supports what is called a chroot install. Beyond that, I can't help as I've only heard about it from a friend.

    koenigdmj : Look up *debootstrap* if you want to go this route.
    From staticsan
  • The dd command might help; a small distro to use would be Tiny Core.

  • There are a few ways to do this:

    1. Copy the contents of a small distro like Puppy, Tiny Core, Knoppix, etc. to another partition. Set up the small distro to start networking and sshd at boot, change the GRUB config to boot the new distro, then reboot. Then you can make the image in any of the ways below.

    2. Use cpio, tar, or rsync to make a copy of the filesystem. It doesn't give you an image but a copy of the contents. It may be all you need but if not see the other options.

    3. Use dump to make the image. If there is a dump utility for the filesystem you are using, then dump gives you an image of the filesystem that can be restored to any partition big enough to hold all the data. There are dump utilities for XFS and ext2; you can use the ext2 dump for ext3 filesystems as well. To make a complete image you would run something like dump -0f - /mountpoint/or/dev_entry > image_file

    4. Use dd to make an image of the running drive. If you use dd you may need to run fsck against the image before using it. I've used the dd method against lots of different Unix variants. The big drawback is that it copies every block even if it holds no data. You may want to experiment with block sizes before making the copy, but 16M is usually a reasonable starting point. dd if=/dev/sda1 bs=$((16 * 1024 * 1024)) > image_file is an example.

    If you have a large enough partition available you can make the copy locally. Otherwise you will need to copy it across the network, which may make this very time consuming. Compression can help reduce the network time if bandwidth is a bottleneck. Piping the output through gzip to ssh can be done by replacing the "> image_file" with "| gzip | ssh user@remote_host 'cat > /path/to/file.gz'".
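
    As a concrete sketch of that network variant (the device, host and paths are placeholders; adjust the block size and compression to taste):

    # image the partition, compress it, and stream it to another machine over ssh
    dd if=/dev/sda1 bs=16M | gzip | ssh user@remote_host 'cat > /path/to/sda1.img.gz'

    # later, restore it onto a partition at least as large
    ssh user@remote_host 'cat /path/to/sda1.img.gz' | gunzip | dd of=/dev/sda1 bs=16M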

  • If you are on a Debian-based system you can use debootstrap to create a minimal Debian install in an extra partition. Everything else is already in your question, although I would add "install an SSH server and make sure it is started on boot" after the "chroot" part.

    Also make sure that networking will start in the minimal install.
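
    A rough sketch of that sequence (the partition, mount point and release name are placeholders):

    # build a minimal Ubuntu system in the spare partition
    mkfs.ext3 /dev/sda3
    mount /dev/sda3 /mnt/rescue
    debootstrap hardy /mnt/rescue http://archive.ubuntu.com/ubuntu

    # inside the chroot: set a root password and install the ssh server
    chroot /mnt/rescue
    passwd
    apt-get update && apt-get install openssh-server
    # also configure /etc/network/interfaces here so networking comes up on boot
    exit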

    From 0x89

How to find out what caused Vista shutdown?

When I got back from work today I found my Vista64 machine in a bad state: the computer was on, but the screen was black and the computer was not responsive (pressing Num Lock did not cause the light to come on).

It looks like a bad shutdown (that I did not initiate). I'm trying to figure out what caused the shutdown, but the only clue I got from the event log is

"The previous system shutdown at 5:51:28 PM on 8/3/2009 was unexpected."

Does anyone know of any other clues I can look for? Again, this is Vista64. Thanks!

  • If you type "problems" into the Start menu and select Problem Reports and Solutions, Vista will report the problem to Microsoft, which may be able to narrow it down to a subsystem. It's probably time to make sure all your drivers are up to date, too.

    From Knox
  • Most of the time when you see this someone has pressed the big red button. The message in the log indicates that there was no shutdown command given, so the event log service never got notified. If there were no other errors in the event log this is probably a hardware problem.

    Giovanni Galbo : I suppose overheating might fit the "big red button" bill... the room did seem pretty hot when I got in.
    From Jim B
  • If you have Automatic Updates turned on, check the bottom of your C:\Windows\WindowsUpdate.log for any error messages.

What is the best way to upgrade many linux desktop workstations at once?

At work, we have several Ubuntu Linux workstations. I'm looking for a good/reliable/fast way to install a set of packages on all boxes at once. What I'm thinking of doing right now is:

  1. Install Ubuntu on a brand new box and use that as a master disk image.
  2. Clone or copy the partition contents to all boxes.
  3. When a package/set of packages need updating, apply the changes on the master disk image.
  4. Dump the master disk image to a central NFS server.
  5. Use PXE/diskless booting to put all workstations in a recovery mode.
  6. Clone the master disk image to all workstations once a week.
  7. Use a configuration management tool (what should I use?) to set up /etc and friends.

Has anyone else done something similar? How did you approach it?

I'm already using NFS/NIS, so I won't lose any user data.

  • You can ghost it over to them all, but most Linux distributions include a method to script the installation process. The advantage is that it will ask you if a problem occurs, whereas just ghosting it over doesn't.

    With Fedora (amongst others) you can script it such that all workstations log in to a central source of control/alerting, so that they conduct package installation autonomously but ask when differences and problems occur.

    As an alternative, if you have mixed machine types, group their MAC addresses into hardware setup groups and use a live CD to rsync and install grub, dependent on MAC/Hardware setup.

    Tons of approaches really. Tutorials on headless installs will provide some nifty ideas with or without screens.

    Scripting the Fedora/Anaconda install process

    Chad Huneycutt : The google term for Ubuntu automated installs is "preseed files", and I definitely agree that is the way to go. Imaging is so hard to maintain, and with the rate of security updates, it is important to be able to quickly update all your machines. Automated/unattended installation + automated configuration management (i.e., puppet) + managed package repos if you really need them.
    From Aiden Bell
  • I think Puppet can help you with this. You're essentially managing a group of workstations instead of servers, but it should work the same way. That way you can create different groups based on any hardware differences, etc.

    David Pashley : +1 for puppet. You could use cfengine or chef too.
    : +1 puppet is the business
    From Brent
  • Another option you could consider is using the existing processes like "yum update" to pull packages from a repository that you run and force those workstations to all update at a specific time. All you need to do is update a master workstation, note the packages you need to distribute, and put them into your internal repository.
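
    A minimal sketch of that flow (the repository path, URL and schedule are hypothetical; createrepo comes from the createrepo package):

    # on the repository server: drop in the packages and rebuild the metadata
    cp *.rpm /var/www/html/internal-repo/
    createrepo /var/www/html/internal-repo/

    # on each workstation, e.g. in /etc/cron.d/auto-update: pull updates nightly
    0 3 * * * root yum -y update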

    setatakahashi : You need to make a repository inside your network for faster access and run yum update -y. For apt-get to work he will have to mirror an Ubuntu repository.
  • ssh in to each box and run "apt-get install whatever".

    Consider having all the clients trust your public SSH key from your administration host so you do not need to provide a password to do so.
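
    A quick-and-dirty loop along those lines (the host list and package name are placeholders):

    # push the same package to every workstation over ssh
    for host in ws01 ws02 ws03; do
        ssh root@"$host" "apt-get update && apt-get -y install whatever"
    done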

  • If you're using a debian-based distro like Ubuntu, look into pre-seeding:

    DebianInstaller Pre-seed

    This allows you to specify just about everything you set up during the install, like networking, apt, packages, etc. When you boot the install CD, append:

    preseed/url=http://blah/preseed.cfg
    

    to the kernel boot menu, and it'll use your preseed config to do the install.

    David Pashley : This will only help you with the initial install. The question is about upgrading packages on an existing installation.
    From Bob
  • I used to be an administrator of my Electrical Engineering department's computer lab, with about 20 computers, all running Ubuntu. I liked to always use the latest Ubuntu release as soon as it was released, so I upgraded a lot.

    The setup of the computer lab was such that I had one Debian master box (seldom updated/upgraded), which I used for hosting the student branch's web page, managing the user accounts (with LDAP, allowing each user to sit at any computer and log in with his/her home folder available), running maintenance scripts, etc.

    The method I used to update (which is somewhat crude, in my opinion) involved writing a CD with the latest release, manually placing it in the drives, rebooting and going through the regular installation procedure. When the install was complete I copied the public RSA key I had generated (once) to the host (into the /root/.ssh/ folder), thereby giving the Debian box control over the host box. Then on the master box I had a Python script (it can of course be any scripting language) which brought the host computer up to speed with my desired configuration: copying config files to the host box (such as the LDAP config files, pre-built GNOME config files, etc.), apt-getting the required packages (a lengthy process), configuring them (by copying their config files and menu-item files to the correct places) and otherwise setting the host box up.

    This process, although crude and unsophisticated, only required my presence for actually booting the "to-be-updated" box from the Ubuntu setup CD, going through the few setup screens, and configuring the /etc/network/interfaces file for access to the network; after starting the script on the master box I could be off doing something else.

    If you want more info, please post more specifically what you want to automate: whether it is just the actual process of setting up a new release of a currently running Linux distro, or setting up programs that require building from source and such ('cause I also used to build my own packages for programs such as Eclipse (which doesn't play nice with Ubuntu straight from the package manager), XCircuit (which is "buggy" at best from the package repo), and Matlab (which requires punching in a CD key and more)...).

    Hope that helps! =)

    From aright
  • I would suggest creating an APT repository. You can add your own packages to the repository, and use a cronjob to update the packages using apt-get once a week. The apt-get job can be made automatic, and since the repository is your own, you can update it or not as you desire.

    All you would need to do is set up the repository and configure APT on all the machines to use it. I would recommend cfengine to configure all of the systems; that way you don't have to visit each one to update its APT configuration.

    You could even create a package with the repository configuration built right into it; I would recommend it in fact. Then when you build a new environment your local APT configuration is just an apt-get away.
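
    A minimal sketch of the workstation side (the repository URL, distribution and component names are hypothetical):

    # point APT at the internal repository
    echo "deb http://apt.example.internal/ubuntu hardy main" > /etc/apt/sources.list.d/internal.list

    # weekly unattended upgrade, e.g. in /etc/cron.d/weekly-upgrade
    0 4 * * 0 root apt-get update && apt-get -y upgrade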

    From David
  • I find it hard to believe that no one has mentioned Redhat Spacewalk.

    It's the free, open-source equivalent of Red Hat Network Satellite. It allows you to manage your entire infrastructure of CentOS, Fedora, or Scientific Linux installations. It is meant for essentially what you want to do.

    Of course, you're using Ubuntu, as opposed to Red Hat-derived distros. Fortunately, the Ubuntu world has what you're looking for in Landscape. It comes free with a support contract, or it's $150/node. Expensive, but it's a trade-off.

    If you don't go with Landscape (or migrate to RH for spacewalk), then Puppet/CFengine might be your best bet.

Tools to manage large network of heterogeneous web applications?

I recently started a new job where I've been tasked with managing a global network of heterogeneous web applications. There's very little documentation. My first order of business is to create an inventory of all of the web applications. Are there any tools out there to manage a large group of web apps?

I'd like to collect a large dataset for each website including:

  • logins for web based control panels
  • logins to FTP/ssh accounts
  • Google analytics tracking code for each site
  • 3rd party libraries used
  • SSL certs, issuers, and expiration dates
  • etc

I know I could keep the information in Excel or build a custom database, but I'm hoping there's already a tool out there to help me with this.

  • I would look to your monitoring and change management/trouble ticket solutions for implicit documentation of the environment. Some solutions, like Zabbix or OpenNMS, may even help you out by auto discovering the network.

    This is ultimately a question about maintaining documentation, which is unfortunately a political and not a technological problem.

    Andrew : Matt, You just hit the nail on the head. I'm realizing no matter how great the technology is this is ultimately about me helping people to realize how important documentation is.
    From Matt
  • It might not be quite what you are after, but you could check out Nagios, which includes modules for monitoring/collecting data on all kinds of things; you can also script for it in various languages.

    If the sites are live, then you could write a script to dump the information you want and also to update your data if information is always being added to the site.

    good luck

    From Aiden Bell
  • Sounds like you bring up a threefold question. This is what I think you're looking for:

    • A Password Management System
      • KeePass might be a pretty good solution for you. You can save your passwords in one secured database locked with one master key (or key file), so you only have to remember a single master password or select the key file to unlock the whole database.
    • A Version Control System

      • You could use Subversion, which manages files and directories and the changes made to them (you can keep your keys, certs, notes, and config files there; you could even keep copies of your password databases created with KeePass). This allows you to recover older versions of your data or examine the history of how your data changed. If you're a Windows user, try VisualSVN for the server and TortoiseSVN for the client.
    • A Monitoring System (Not sure if you're really looking for that, but..)

      • To monitor our servers, routers... pretty much any kind of host we care about, we use Zenoss and are pretty happy with it. Here's a short list of monitoring solutions, though; you can search for any of them here, or Google/Bing them: Zenoss, OpenNMS, Nagios

    All of these solutions, I believe, are open source.

    Andrew : Great suggestions. Just started using KeePass and it's great! We also already have Subversion and will be implementing Nagios shortly.
    From l0c0b0x

Move stored procedures from one database to another in SQL Server

I'm using SQL Server 2008, and I would like to copy stored procedures from one database to another. How?

  • Just use the Management Studio to generate a script for the stored procedures, save the script to a file then run it on the other SQL Server.

    From memory, you right-click the database and under All Tasks there is Generate Scripts or something like that. This will produce the Transact-SQL to create whatever you select.

    JR

    jhayes : is there an import/export wizard in 2008? 2k had the option to copy objects.
  • Right-click the SP under the DB and click Script Stored Procedure As > CREATE To > File; it will create a SQL script file. Then run that script on the other database.

    Joel Coel : Just be careful, because this script often has a _USE [Database]_ command at the top. If the new database is named something different, you'll want to update that as well.
    Dan : That is true, thanks.
    mrdenny : +1 for step by step instructions.
    From Dan
  • Here is a query (set the output to text) to return the stored procedure definitions:

    SELECT ROUTINE_DEFINITION
    FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='PROCEDURE'
    
    mrdenny : This doesn't always work, depending on text wrap settings, and the width of the code. It can under certain setups give you funky CRLFs in the output code, as can sp_helptext.
  • There are options in SSMS to make it generate permissions for users as well - that's handy.

    Tools>options>scripting

    You can also script multiple by clicking 'stored procedures' in your Object Explorer pane and then multi selecting procedures in the Object Explorer Details window. Right click the selected procedures and you're ready to go.

    I prefer redgate's SQL Compare.

    From Sam
  • The answers above are all good and will work. The issue (in my world, anyway) is: where are your sprocs?

    In my case, we have one kit of sprocs in the app db (business logic, etc), and another set of system management sprocs in master.

    The kicker for me is having to move (and keep in sync) the sprocs in master ....

    From samsmith

MOSS Stretched Farm over a WAN link - a bad idea?

I am looking to build a disaster recovery strategy for our MOSS farm. The stretched farm configuration looks appealing, but the MS documentation warns not to use it when there is more than 1ms latency between data centers.

Our WAN link has an average 6ms latency between our data centers. Has anybody ever built a stretched farm over a WAN link? Are Microsoft's warnings sincere or overprotective?

Reference: http://technet.microsoft.com/en-us/library/cc748824.aspx

  • They are sincere, and you put yourself into an unsupported environment by doing so (CSS will not be able to help you when you run into weirdness between the farms).

    If disaster recovery is your primary concern, your best option would be to configure disaster recovery across SharePoint farms by using SQL Server log shipping. If you are more interested in providing the ability for users connected to different datacenters to use a single (logical) SharePoint implementation, then you would want to look into globally deploying multiple farms.

    Ryan Michela : Good call on becoming unsupported. Don't want that.
    From Sean Earp
  • You are better off going with the SQL Server log shipping route with a secondary farm. You're probably already familiar with the link, but here is the SharePoint availability guide.

    I'm not a big fan of the multiple-farm idea for disaster recovery; I greatly prefer the log shipping option.

    From Jim B

IIS 7 on SBS 2008 - logging is going haywire

The C: drive on our SBS 2008 server just ran out of space. The culprit appears to be IIS, which is logging an incredible amount of activity in one particular log folder. Most of the IIS log folders look normal, but each of the daily files in C:\inetpub\logs\LogFiles\W3SVC1372222313 is at least 4.4MB, with the largest being yesterday's, at 1.67 GB!

The largest ones I can't even open on the server, but I've examined several of the smaller ones. They all show several dozen entries being made every few minutes that look like this:

2009-07-11 00:00:02 fe80::5558:434c:a610:405a%10 POST /ApiRemoting30/WebService.asmx - 8530 [DOMAINNAME]\[SERVERNAME$] fe80::5558:434c:a610:405a%10 Mozilla/4.0+(compatible;+MSIE+6.0;+MS+Web+Services+Client+Protocol+2.0.50727.4016) 200 0 0 3

Typically 40 or 50 of these entries will be made in the same second, with gaps of 2-5 minutes between each batch of entries. The other 1 percent of the entries in the file appear to involve WSUS.

I'm going to delete most of these files, because I don't really have a choice, but I'd like to know what's causing this out-of-control logging and how to put a lid on it in the future.

UPDATE: Okay, I've been able to examine a few more files. The bloat is apparently being caused by something going wrong when someone (i.e., me or another admin) logs in to WSUS interactively:

  1. The trouble begins with a single log entry with no username (just "-"). It gets an HTTP status of 401.2 and a sc-win32-status code of 5.

  2. This is followed by a long stretch of entries that alternate between no username and my own username. The ones with no username have an HTTP status of 401.1 and a sc-win32-status of 2148074254. The ones with my username are normal HTTP 200 entries.

So as far as I can tell what appears to be happening is that when I log in to administer WSUS via the SBS console, NTLM authentication is not persisting behind the scenes, causing continual reauthentication attempts throughout the session, transparently to me. Hundreds of these entries are being created every second, adding about 70MB per hour to the log file. I have no idea why this is happening.

  • That's IPv6-based access to WSUS that you're seeing there.

    Temporarily disable logging so that you don't fill the drive again:

    • Jump into IIS Manager
    • Locate the WSUS web site (it'll be the one listening on port 8530)
    • Bring up the Logging properties for the root of the site
    • Click "Disable" in the "Actions" pane.

    That'll stop the logs from building up.

    I can't say that I've seen WSUS-related traffic build up logs that big before. 4.4MB in a day isn't unheard of, but the 1.67GB in a day means that something has gone wrong.

    Yesterday's log file is going to tell you lots about what was occurring. I find it hard to believe that it was all WSUS traffic. I wonder if something else didn't start banging on the server computer. Get that larger log file off of the machine and have a look at it.

    Your log looks like it's in the W3C extended format. The format of that log file appears to be:

    Date, Time, source IP address, HTTP request method, URI stem, probably URI query, server port, username, server IP address, user agent, HTTP result, probably Win32 status, and probably time taken

    (The "probably" fields are because I can't be sure without seeing more of the file.) The header on the file will tell you the format for sure.

    You need to get a look at that 1.67GB file; it's going to tell you what's up. Disabling logging on the site will prevent the hard drive from filling up again, but you still want to know what's happening behind the scenes, since it's going to be impacting server performance in some manner. Ultimately, you want to get to the bottom of the cause and then get logging enabled again (so that you have an audit trail if you have to track down strangeness again in the future).

  • Is WSUS working ok? You could try running the WSUS diagnostic tool.

    Microsoft Windows Server Update Services Tools and Utilities

    From JS
  • On one side of the equation, you still want to figure out why so much log traffic is being generated (no suggestions there...), but I have found that the log folder is a good candidate for NTFS compression. Those text files compress nicely, and since you rarely open the log files, you will likely not even notice that they are compressed.

    phenry : Good idea. I'll try that.
    From Sean Earp
  • Hi,

    I'm experiencing exactly the same issue. It was wishful thinking that WSUS SP2 was going to resolve the problem... has anyone gotten any further in getting to the root cause of this? I've had a look around IIS and, short of turning off logging, I'm not sure what else to try...

    Many thanks,

    Mike

  • Same problem here, any solution?

  • On a related note, you may want to consider turning off logging permanently for the WSUS site. Granted it may be useful for troubleshooting WSUS problems, but you can always turn it on as needed.

    Having said that, I would look into the root cause of all the log entries and address that issue, which will then make my previous sentence a moot point. ;)

    From joeqwerty
  • I had the same problem; 20GB of log files in there for me. I noticed that the user listed was an account that I use for our backup program that has to be logged in all the time so backups will run unattended. I have a suspicion it's because the SBS Console is also usually always up as well. Can anyone corroborate this?

  • I'm experiencing exactly the same problem and just stumbled over it because the system partition was filling up. Since the day I started the server, this very error has been written into the log files multiple times per second in the manner described above. The peak was a file of about 270 MB.

    I find it very strange that the source AND destination IP addresses are always the same. This leads to the thought that the server has a problem talking to itself. I already found an article addressing this issue but it didn't help in my case: http://verbalprocessor.com/2008/06/03/sccm-and-wsus-on-server-2008/

    Did anybody find out anything new about this?

SNMP equivalent for show ip route?

I'm new to SNMP. Is there an equivalent in SNMP to "show ip route" on a Cisco 10K router?

  • RFC1213-MIB has an ipRouteTable tree containing the IP routing table:

    [draytm01@mgt03 ~]$ snmpwalk -v 1 192.168.212.45 .1.3.6.1.2.1.4.21
    RFC1213-MIB::ipRouteDest.0.0.0.0 = IpAddress: 0.0.0.0
    RFC1213-MIB::ipRouteDest.192.168.212.0 = IpAddress: 192.168.212.0
    RFC1213-MIB::ipRouteIfIndex.0.0.0.0 = INTEGER: 4
    RFC1213-MIB::ipRouteIfIndex.192.168.212.0 = INTEGER: 4
    RFC1213-MIB::ipRouteMetric1.0.0.0.0 = INTEGER: 1
    RFC1213-MIB::ipRouteMetric1.192.168.212.0 = INTEGER: 0
    RFC1213-MIB::ipRouteNextHop.0.0.0.0 = IpAddress: 192.168.212.1
    RFC1213-MIB::ipRouteNextHop.192.168.212.0 = IpAddress: 0.0.0.0
    RFC1213-MIB::ipRouteType.0.0.0.0 = INTEGER: indirect(4)
    RFC1213-MIB::ipRouteType.192.168.212.0 = INTEGER: direct(3)
    RFC1213-MIB::ipRouteProto.0.0.0.0 = INTEGER: local(2)
    RFC1213-MIB::ipRouteProto.192.168.212.0 = INTEGER: local(2)
    RFC1213-MIB::ipRouteMask.0.0.0.0 = IpAddress: 0.0.0.0
    RFC1213-MIB::ipRouteMask.192.168.212.0 = IpAddress: 255.255.255.0
    RFC1213-MIB::ipRouteInfo.0.0.0.0 = OID: SNMPv2-SMI::zeroDotZero
    RFC1213-MIB::ipRouteInfo.192.168.212.0 = OID: SNMPv2-SMI::zeroDotZero
    

    This is actually from a Linux box but I'd hope Cisco implemented RFC1213-MIB; I can't remember and I don't have any routers to hand.

    chardin : Thank you. Just what I needed. I also discovered this great Cisco SNMP Object Navigator: http://tools.cisco.com/Support/SNMP/do/BrowseOID.do?local=en
  • While not Cisco specific, you can use: .1.3.6.1.2.1.4.21 which corresponds to .iso.org.dod.internet.mgmt.mib-2.ip.ipRouteTable from the RFC1213.mib (check mibdepot.com for a copy).

    If you want to search for a cisco specific MIB you might try: http://www.mibdepot.com/cgi-bin/vendor_index.cgi?r=cisco

    A good resource for SNMP education is www.wtcs.org/snmp4tpc/

    From RobW

Creating weekly MySQL reports

I have a cron script that runs daily to LOAD DATA INFILE into a MySQL database. I would like to, using PHP as part of a web application, generate a weekly report showing the total number of inserted records from the last 7 days.

I would like to know the best way to go about this. Patching or modifying MySQL is not an option, and neither is changing the way data is inserted. I would prefer to avoid counting the CSV files before inserting, as it would be messy to read that data back into PHP.

Suggestions?

  • If all you are interested in is the number of rows in specific schemas and tables (and assuming that there are no row deletions), then you can use the MySQL command SHOW TABLE STATUS, or from the command line use mysqlshow --status dbname. This will list, for each table in the schema, some extended information including row counts (or you can use LIKE to select specific tables).

    The same data can be retrieved using simple SQL from the INFORMATION_SCHEMA database, maybe like this:

    SELECT Table_name,SUM(Table_rows) FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_NAME in ('table1','table2');

    You can create a simple cron job that takes this snapshot once a day and then it should be fairly easy to extract that data in a useful way from PHP. Heck, you can even store it in a database table ;-)
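
    A minimal crontab sketch of that daily snapshot (the database name and log path are placeholders; it assumes MySQL credentials are available, e.g. in ~/.my.cnf):

    # append a timestamped row count per table to a log file once a day
    0 1 * * * mysql -N -e "SELECT NOW(), TABLE_NAME, TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mydb'" >> /var/log/rowcounts.log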

    Bill Gray : how would I extract that data from within php however?
    Guss : Well, you might want to create a table to store time-based data on the tables you want to check: `CREATE TABLE table_status (check_time TIMESTAMP, table_name VARCHAR(50), rowcount INT)` then run a cronjob to update it daily using `INSERT INTO table_status_update(table_name,rowcount) SELECT Table_name,sum(Table_rows) FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_NAME in ('mytable');`. Then a simple select could compare each table's rowcount between the last value and the previous week.
    Bill Gray : Is there any way to avoid doing this as a cronjob, and just retrieve the data inserted in the last 7 days? some sort of internal mysql record perhaps?
    Guss : Not that I'm aware of. Theoretically you could parse the binary logs, but I wouldn't recommend it, especially not directly from a web application.
    From Guss
  • Must it be inside the PHP app? Inside your cron script, you could count the number of rows to insert and add that either to a logfile or to a small log table inside the MySQL db (unless that is what you mean by modifying MySQL). I am not sure, but I guess the MySQL function ROW_COUNT() will work with INFILE data as well, so counting the numbers would be easy.

    And, maybe the simplest method: if the rows are not modified after inserting, you could add a timestamp column to the table, which gets automatically set to the date of the insert, and COUNT() up the numbers for a week.
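
    A rough sketch of that approach, shown with the mysql command-line client (the table, column and database names are placeholders, and it assumes the LOAD DATA statement does not supply a value for the new column; the same SELECT can be run from PHP):

    # add an insert-timestamp column once
    mysql mydb -e "ALTER TABLE mytable ADD COLUMN inserted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"

    # the weekly report is then a single count over the last 7 days
    mysql mydb -e "SELECT COUNT(*) FROM mytable WHERE inserted_at >= NOW() - INTERVAL 7 DAY"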

    Bill Gray : yes, it must be from within the php app. I'm not sure of how to do anything from within php however..
    From SvenW
  • Use the PHP mysql_info() function immediately after you've run your LOAD DATA INFILE query. This will return a string like:

    Records: 1 Deleted: 0 Skipped: 0 Warnings: 0

    See the PHP function reference for mysql_info(); related would be the mysql_affected_rows() function.

    Whatever you do, don't depend upon the row count in SHOW TABLE STATUS. It can be wildly incorrect depending upon your storage engine and is really only useful for the query optimizer to make some better judgments on optimizations. This is documented in SHOW TABLE STATUS (with the specific section quoted here)

    Rows

    The number of rows. Some storage engines, such as MyISAM, store the exact count. For other storage engines, such as InnoDB, this value is an approximation, and may vary from the actual value by as much as 40 to 50%. In such cases, use SELECT COUNT(*) to obtain an accurate count.

    The Rows value is NULL for tables in the INFORMATION_SCHEMA database.

Cross forest groups?

I'm trying to move some Exchange mailboxes cross-forest; however, I'm reading that I need to add the user doing the mailbox move to the "Exchange Recipient Administrators" role. However, I'm not able to add a user from another forest, even though I have a forest trust set up. Am I doing something wrong?

"Exchange Permission requirements: Logon account for the user who is running Move-Mailbox needs to be a granted the "Exchange Recipient Administrators" role for Source and Target Forests and "Exchange Servers" role for both source and target Server. Permissions for legacy Exchange Servers remain the same as they were for Exchange 2003 Migration Wizard."

  • This article explains quite well how to configure cross-forest administration for Exchange and should get you what you want. You don't say whether you're using Exchange 2007 or 2003; the article is for Exchange 2007, but the principles should be the same for 2003.

    From Sam Cogan

What would you say is the best current Link Balancer solution out there?

Rather than a server load balancer, I may have a need for an actual internet link balancing application that allows me to share incoming and outgoing traffic between 2 or more ISP connections (or even a single ISP, but with multiple connections from that ISP) going to moderately high traffic servers that don't yet justify the cost of a full gigabit connection but frequently need more than 100 Mbit in bursts.

Searching around, I notice a lot more solutions out there than there was only a year ago. But price vs. performance vs. value all seem to be everywhere on the board.

Any recommendations? Feel free to let me know if you need any clarification. I am just looking for some opinions on trusted solutions, and what the cost might be compared to going to a gigabit connection.

  • This might be a great appliance to look at: the Barracuda Networks Link Balancer.

    I have used Barracuda Spam and Web filters with great success and I highly recommend them.

    Zethris : My major reservation with them is their overpriced required service plans on top of the hardware investment. A Link Balancer investment for my application with them seems to be $8000 minimum with a 250 Mbit cap.
    From xeon
  • I heard of a link load balancer from Radware that does exactly the same thing. I tested an IPS from Radware named DefensePro once and it was good. I attended one of their meetings after testing DefensePro, and there I saw presentations on their link load balancer, application load balancer, XML offloading devices, etc.

    Have a look at their website at www.radware.com. I think the name of the link load balancer is LinkProof.

  • I love Peplink's (peplink.com) link balancing router. We tried the Barracuda as we really like their Web and Email appliances, but it was too new and couldn't really handle the advanced situations we needed. Plus, the peplink has a really slick interface.

    ctennis : I add a vote for Peplink. It's very easy to setup and highly configurable.
    From Brett G
  • We have looked at a lot of appliances, and the best one we found that not only provides enterprise-class features but also comes at a reasonable price is the EdgePRO from XRoads Networks.

  • We checked out a bunch, including Peplink and Barracuda, but they were not selected as their feature sets were not the greatest. We settled on Elfiq, which quite frankly delivers what we need, and their support team actually responds with answers (unlike some other vendors, which shall remain nameless!). This box also fails to wire should it fail, so you don't add a point of failure to your redundancy - kind of important, no?

    From George

"reset" functionality in Windows SteadyState

I am running some upgrade testing on a computer with Windows SteadyState.

The workflow is "upgrade, write down results, reset to default condition".

However, I can't seem to dig up a button that says, "Reset Windows Now!", which would be handy.

Is there such a thing?

Thanks!

  • I believe that the only way to revert to the original system state in SteadyState is to reboot the computer.

    From RascalKing
  • You should be able to script this with something like the following (setting CurrentMode to WDP_MODE_DISCARD), but AFAIK you would then also have to reboot:

    ' Targets the local machine; WDP_MODE_DISCARD is a constant whose numeric
    ' value comes from the WDP_Control class documentation - define it before running.
    strComputer = "."

    ' Connect to the WMI namespace that exposes Windows Disk Protection
    set objWbemServices = GetObject ("winmgmts:\\" & strComputer & "\root\wmi")
    set setWdpObjects   = objWbemServices.ExecQuery ("SELECT * FROM WDP_Control")

    ' Set disk protection to discard mode so changes are dropped at the next reboot
    for each objWdp in setWdpObjects
       objWdp.CurrentMode  = WDP_MODE_DISCARD
       objWdp.Put_
    next
    

    More info here.

    From Adam Brand

Ubuntu file permissions

I'm having some trouble with file permissions on an Ubuntu server. I'm using WinSCP to move files to the server. The server will work fine, and then after a while it appears that I no longer have permission to delete a file.

I'm connecting to the server using an account called svadmin, and the root directory of the Apache server is /var/www. Each website has its own directory under this - i.e.

/var/www/site1
/var/www/site2

This is the output from the ls command...

cd /var/www
ls -al
drwxr-sr-x   4  svadmin  svadmin  4096 2009-06-12 14:45 .
drwxr-xr-x  15  root     root     4096 2009-05-05 15:47 ..
drwxr-sr-x   4  svadmin  svadmin  4096 2009-06-12 15:15  site1
drwxr-sr-x   4  svadmin  svadmin  4096 2009-06-12 15:15  site2

My understanding is that this means the directory owner has read/write/execute? When I connect to the server using the svadmin account, shouldn't I be able to overwrite or delete files in /var/www/site1 or /var/www/site2?

I'm not very familiar with linux file/directory permissions, so have been struggling to work out what I should be doing. Any help would be greatly appreciated!

More info: (thanks for the quick replies!)

Output of ls -al for /var/www/site1

drwxr-sr-x 4 svadmin svadmin 4096 2009-06-12 15:15 .
drwxr-sr-x 4 svadmin svadmin 4096 2009-06-12 14:45 ..
-rw-r--r-- 1 svadmin svadmin 157  2009-05-12 13:23 error.php
-rw-r--r-- 1 svadmin svadmin 158  2009-05-12 13:23 .htaccess
-rw-r--r-- 1 svadmin svadmin 142  2009-05-12 13:23 index.php
drwxr-sr-x 2 svadmin svadmin 4096 2009-05-12 18:40 libraries

Error message when I try to delete the file:

rm admin.php
rm: cannot remove 'admin.php' : Read-only file system

Even more info: Just to add some possibly relevant information... everything was working until yesterday afternoon. At that point a coworker took out the SAN that the virtual machine file was on, and the web server had a less-than-graceful shutdown.

  • If you have rwx on a directory, that means you can edit the directory file, which amounts to removing and adding files. Editing files is a function of their individual permissions. What does an ls -l of one of the subdirectories look like?

    Matt : Thanks - I added the output...
  • CHMOD

    Section: User Commands (1)

    NAME

    chmod - change file mode bits

    SYNOPSIS

    chmod [OPTION]... MODE[,MODE]... FILE... chmod [OPTION]... OCTAL-MODE FILE... chmod [OPTION]... --reference=RFILE FILE...

    DESCRIPTION

    This manual page documents the GNU version of chmod. chmod changes the file mode bits of each given file according to mode, which can be either a symbolic representation of changes to make, or an octal number representing the bit pattern for the new mode bits.

    The format of a symbolic mode is [ugoa. . .][[+-=][perms. . .]. . .], where perms is either zero or more letters from the set rwxXst, or a single letter from the set ugo. Multiple symbolic modes can be given, separated by commas.

    A combination of the letters ugoa controls which users' access to the file will be changed: the user who owns it (u), other users in the file's group (g), other users not in the file's group (o), or all users (a). If none of these are given, the effect is as if a were given, but bits that are set in the umask are not affected.

    The operator + causes the selected file mode bits to be added to the existing file mode bits of each file; - causes them to be removed; and = causes them to be added and causes unmentioned bits to be removed except that a directory's unmentioned set user and group ID bits are not affected.

    The letters rwxXst select file mode bits for the affected users: read (r), write (w), execute (or search for directories) (x), execute/search only if the file is a directory or already has execute permission for some user (X), set user or group ID on execution (s), restricted deletion flag or sticky bit (t). Instead of one or more of these letters, you can specify exactly one of the letters ugo: the permissions granted to the user who owns the file (u), the permissions granted to other users who are members of the file's group (g), and the permissions granted to users that are in neither of the two preceding categories (o).

    A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and restricted deletion or sticky (1) attributes. The second digit selects permissions for the user who owns the file: read (4), write (2), and execute (1); the third selects permissions for other users in the file's group, with the same values; and the fourth for other users not in the file's group, with the same values.

    chmod never changes the permissions of symbolic links; the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals. SETUID AND SETGID BITS chmod clears the set-group-ID bit of a regular file if the file's group ID does not match the user's effective group ID or one of the user's supplementary group IDs, unless the user has appropriate privileges. Additional restrictions may cause the set-user-ID and set-group-ID bits of MODE or RFILE to be ignored. This behavior depends on the policy and functionality of the underlying chmod system call. When in doubt, check the underlying system behavior.

    chmod preserves a directory's set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s, and you can set (but not clear) the bits with a numeric mode.

    Anthony Lewis : This seems a little harsh...
    FerranB : I think is a bad idea to cut&paste a man page. A link to a chmod man page gives better answer.
    Matt : Thanks for the information, I really appreciate it. I'd come across several articles similar to this while searching before I asked on here. I understand the structure of the formatted letters, and the other concepts. That's just it! My subdirectory has -rw and the owner is the account I am using. Shouldn't I have read and WRITE access? I apologise if this seems really easy, and know that people get sick of answering similar questions relating to file permissions. I also suspect that I'm not the only new person to linux wondering why this permission system is so frustrating to learn!
    theotherreceive : The account owns all the files in question - chmod isn't really relevant here.
    From Yordan
  • Your problem is the sticky bit. Notice that the perms there aren't drwxr-xr-x, they are drwxr-sr-x. Wikipedia says:

    The most common use of the sticky bit today is on directories, where, when set, items inside the directory can be renamed or deleted only by the item's owner, the directory's owner, or the superuser; without the sticky bit set, any user with write and execute permissions for the directory can rename or delete contained files, regardless of owner. Typically this is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files. This feature was introduced in 4.3BSD in 1986 and today it is found in most modern Unix systems.

    So, files are only writable and deletable by the group that put them there in the first place.

    Insyte : The "sticky" bit is not set, rather it's the "setgid" bit. The setgid bit forces files dropped into the directory to be owned by the same group as the directory. The sticky bit is indicated by a 't' in the last position of the perms, like so:
    $ mkdir test
    $ chmod a+t test
    $ ls -ld test
    drwxr-xr-t 2 insyte staff 6 2009-07-16 16:52 test
    
    From Bill Weiss
  • This isn't a permissions problem. The two clues are:

    • rm: cannot remove 'admin.php' : Read-only file system
    • everything was working until yesterday afternoon. At that point a coworker took out the SAN that the virtual machine file was on, and the web server had a less than graceful shutdown.

    Somehow the filesystem containing /var/www dropped to "read only", probably when the SAN went away. The output of the mount command should identify this filesystem with a (ro) flag at the end.

    The fix is to figure out why it happened, make sure it's corrected, and remount the filesystem rw with this command:

    mount -oremount,rw $filesystem
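
    For example (assuming /var/www is its own mount point; substitute the actual filesystem from the mount output):

    # confirm the filesystem is flagged read-only (ro)
    mount | grep /var/www
    # once the underlying SAN problem is fixed, remount it read-write
    mount -o remount,rw /var/www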

    Chad Huneycutt : It sounds like /var/www is remotely mounted (on the aforementioned SAN), in which case, the debugging may need to happen on the fileserver. That would mean that the server in question shows the filesystem mounted read/write, but the fileserver has the filesystem read-only.
    Matt : Thanks - the problem was related to the SAN issue, and mounting the drive seemed to solve the problem. Thanks again!
    Insyte : Glad to hear it worked.
    Insyte : Or is fixed, rather.
    From Insyte