Subject: Sybase FAQ: 13/16 - section 10.3
Date: 1 Sep 1997 06:11:21 GMT
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly

Archive-name: databases/sybase-faq/part13
URL: http://reality.sgi.com/pablo/Sybase_FAQ

                                   Q10.3.1

                  SYBASE Technical News, Volume 5, Number 1
                               February, 1996

This issue of SYBASE Technical News contains new information about your
Sybase software. If needed, duplicate this newsletter and distribute it to
others in your organization. All issues of SYBASE Technical News and the
troubleshooting guides are included on the AnswerBase CD. Send comments to
technews@sybase.com. To receive this document by regular email, send name,
full internet address and customer ID to technews@sybase.com.

In this Issue

Tech Support News/Features

   * 1996 Technical Support North American Holiday Schedule

SQL Server 11.0

   * Database "Online" Questions
   * Dump Compatibility Issues
   * Lock Promotion Changes in SQL Server 11.0
   * Changes to Thresholds Behavior in SQL Server 11.0

SQL Server General

   * Threshold Process Fails to Fire
   * Maximum Timestamp Value
   * Dumping Multiple Products to One Tape
   * Dump Transaction and Error 4207
   * Trace Flags 322 and 323 in 4.9.x & 10.x
   * Installing Async I/O on HP-UX 10.0 via SAM
   * Ultrix: 4.4 vs. 4.3a Compatibility

Connectivity / Tools / PC

   * Latest Rollups for PC Platforms

Certification and Bug Reports

   * Latest Rollups for SQL Server

1996 Technical Support North American Holiday Schedule

Sybase Technical Support is open on all holidays and provides full service
on many. During the limited-service holidays shown below, Technical Support
will provide the following coverage:

   * SupportPlus Preferred and Advantage customers may log all cases; we
     will work on priority 1 and 2 cases over the holiday.
   * 24x7 and 24x5 Support customers may log priority 1 cases; we will work
     on these over the holiday.
   * SupportPlus Standard, Desk Top, and Regular Support customers may
     purchase Extended-hour Technical Support for coverage over the holiday.

 Table 1: Sybase Technical Support limited service
             holidays - U.S. customers

          Holiday                    Date
 New Year's Day           January 1
 President's Day          February 19
 Memorial Day             May 27
 Independence Day         July 4
 Labor Day                September 2
 Thanksgiving             November 28
 Christmas                December 25

 Table 2: Sybase Technical Support limited service
           holidays - Canadian customers

          Holiday                    Date
 New Year's Day           January 1
 Good Friday              April 5
 Victoria Day             May 20
 Canada Day               July 1
 Labour Day               September 2
 Canadian Thanksgiving    October 11
 Christmas Day            December 25
 Boxing Day               December 26

If you have questions, please contact Technical Support.

Database "Online" Questions

Sybase SQL Server release 11.0 includes a new online database command. This
article contains a few commonly asked questions about the "online" state of
a database and the use of the online database command. For more information,
please see What's New in Sybase SQL Server Release 11.0?, the System 11 SQL
Server Migration Checklist supplement to this Sybase Technical News, and the
SQL Server Reference Manual.

Question

Executing a load database leaves the database offline; does load transaction
leave a database online or not?

Answer

load transaction leaves the database the way it found it: if it was offline,
it remains offline; if it was online, it comes online again. A customer
doing a sequence of load database, load tran, load tran ... will have to use
the online database command at the end of the load sequence. A customer who
has already brought the database online and who then loads another
transaction dump will not have to repeat online database.
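For example, a typical load sequence from isql might look like this (the
database and dump file names here are hypothetical):

        load database mydb from "/dumps/mydb.dmp"
        go
        load transaction mydb from "/dumps/mydb_tran1.dmp"
        go
        load transaction mydb from "/dumps/mydb_tran2.dmp"
        go
        online database mydb
        go

A single online database at the end of the sequence is all that is needed;
the intermediate load transaction commands leave the database offline only
because load database left it that way.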

Question

If a database becomes corrupt during boot time recovery and I am able to fix
whatever caused the problem, can I just bring the database online with the
online command, or will I have to reboot SQL Server?

Answer

Using the online database command in this context will bring the database
online; you won't have to reboot. Note, however, that if the database has
been marked suspect, online database will have no effect.

Question

Can I run dbcc checkalloc or tablealloc with the fix option when a database
is offline, instead of having to put the database in single user mode?

Answer

Yes. You can also do any other dbcc commands in an offline database.
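As a sketch, assuming a hypothetical database named mydb that is offline
after a load, a check-and-repair session might simply be:

        dbcc checkalloc(mydb, fix)
        go

The database being offline substitutes for single-user mode here, so no
other users need be locked out.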

Question

What causes a database to go offline?

Answer

There are three things that take a database offline:

   * load database. This causes a "persistent" offline state, that is, the
     database stays offline until you issue an online database command.
   * load transaction. As mentioned above, if the database was online before
     the load transaction, it comes back online automatically; if it was
     offline, it stays offline.
   * Database recovery. This causes a "temporary" offline state, that is,
     the database comes back online automatically when recovery is finished,
     unless an error occurs during recovery.

     ------------------------------------------------------------------
     Note
     The "persistent" offline state overrides a "temporary" offline
     state. Thus, if you do a load database followed by a load
     transaction, the offline state set up by the load database
     persists after the load transaction. That's the mechanism by which
     load transaction "leaves the database the way it found it" as in
     the first question above. That's also the way SQL Server detects
     that a database should remain offline even if the server should
     crash while the database is in a load sequence.
     ------------------------------------------------------------------

Dump Compatibility Issues

Question

What dumps are compatible between SQL Server 11.0 and earlier releases?

Answer

The following table shows what SQL Server releases can read dumps from other
SQL Server releases.

  Destination -->
      Source           SQL Server    SQL Server    SQL Server    SQL Server
         v                4.9.x        10.0.x         10.1          11.x
  Dump Level 4.9.x     Yes           No            No            No
  Dump Level 10.0.x    No            Yes           Yes           Yes
  Dump Level 10.1      No            No            Yes           Yes
  Dump Level 11.x      No            No            No            Yes

     ------------------------------------------------------------------
     Note
     This table applies to the compatibility of physical dumps
     themselves, rather than Backup Server compatibility. Backup Server
     releases are only compatible with the SQL Server of the same or
     previous release level.
     ------------------------------------------------------------------

Here are a few important issues regarding dump compatibility between SQL
Server releases:

   * In general, you should never assume backward compatibility: a
     lower-numbered version of a product probably can't read a
     higher-numbered version's files. However, higher-numbered versions
     should be able to read files from lower-numbered versions. You can dump
     from 10.0.2 and load to 10.1.
   * 10.1 dump headers have a field in them that can't be read by 10.0.x
     servers. The new "log version" field was added so that System 11 could
     distinguish a 10.1 release database by reading the dump header. 10.0.x
     doesn't recognize this, so 10.0.x can't read 10.1 dumps.
   * While SQL Server can read several flavors of log record, it can only
     write log records at its own release level; and if it writes an 11.0
     log record into a section of log that currently contains 10.x records,
     the new records will not be readable during a subsequent load or
     boot-time recovery because the log reader expects all log records to be
     in a single format until SQL Server tells it to switch formats.

Consequently, when you start up SQL Server 11, one of two things will
happen:

   * If boot-time recovery succeeds, your databases will be online, but SQL
     Server will refuse to do a logged dump transaction until you do a dump
     database. This means that until you dump database, you can only do dump
     transaction with no_log or with truncate_only, because other kinds of
     dump transaction actually write some log records. truncate_only is
     allowed because the database is online and therefore writable, and
     since you aren't trying to dump log records to be read later it's okay
     to write an 11.x log record.
   * If boot-time recovery fails for some reason, or if the database is
     replicated (in which case it will be offline only until replication is
     finished), you can only do dump transaction with no_log or with
     no_truncate. The database is offline and unavailable for writing, but
     no_truncate is allowed because it is not a logged operation and you or
     Sybase Technical Support (in the case of recovery failure) may find a
     copy of the log useful. If you are able to bring the database online by
     executing online database, you will still have to dump database before
     you can do a logged operation.

In both of these cases, you will see Error 4225:

        This database has not been dumped since it was created or
        upgraded.  You must perform a dump database before you can dump its
        transaction log.

You may also see Error 4226:

        Logged DUMP TRANSACTION cannot run in database %.*s, because that
        database's log version (%d) disagrees with the SQL Server's log
        version (%d); use DUMP TRANSACTION WITH NO_LOG. Versions will agree
        once ONLINE DATABASE has run.

To clear this condition, again, just do dump database.

Lock Promotion Changes in SQL Server 11

Question

Is lock promotion calculated on a per statement basis, or on a per
transaction basis? I have a single process executing several thousand update
statements to the same table in a transaction, and the EX_PAGE locks are not
being promoted to a table-level lock. Is this the behavior I should expect?

Answer

Lock promotion is performed on a per-statement basis. Additionally, there
may be some complex statements for which lock promotion will not be
performed at all. We do not take the transaction into account.

In System 11, however, you can configure the lock promotion threshold for
tables, databases, and servers. This will give you the ability to guarantee
that updates use table level locks. For more information about setting the
lock promotion threshold, please see Chapter 11, "Locking on SQL Server," in
the SQL Server Performance and Tuning Guide.
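As a sketch (following the stored procedure described in the System 11
documentation; the table name and threshold numbers here are hypothetical),
you might raise the promotion thresholds for a single table like this:

        sp_setpglockpromote "table", mytable, 100, 2000, 50
        go

The three numbers are the low-water mark, the high-water mark, and the
percentage used to decide when page locks escalate to a table lock.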

Changes to Thresholds Behavior in SQL Server 11.0

In SQL Server 11.0, there are some important changes in the way that
thresholds behave on the log segment.

Thresholds and the syslogshold Table

In the situation where you or a threshold process cannot truncate the log
because of an open transaction, SQL Server 11.0 provides a new table,
syslogshold, which includes information on what process has the open
transaction. You can use information in this table to set up a threshold
process to deal with the open transaction.

For more information on the syslogshold table, please see the SQL Server
Reference Supplement. For information about using it in conjunction with
threshold procedures, see the chapter "Managing Free Space with Thresholds"
in the System Administration Guide.
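For example, a threshold procedure might look up the oldest open transaction
in its database with a query along these lines (the database name is
hypothetical):

        select h.spid, h.name, h.starttime
        from master..syslogshold h
        where h.dbid = db_id("mydb")
        go

The spid value returned can then be used to notify (or, if your site's
policy allows, kill) the process that is holding up log truncation.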

Private Log Cache Feature

As a performance enhancement, 11.0 introduces the Private Log Cache (PLC).
This feature maintains log records in memory, rather than writing them
directly to the log. However, SQL Server must guarantee that space is
available in the log to flush these records when the time comes.
Consequently, the PLC "reserves" log pages; that is, it marks space as used
before actually writing to it. Presently, this reservation equals three
pages per open transaction per database.

This reserved space counts as "used pages" toward triggering the threshold
procedure, even though under some circumstances that space won't actually be
written. The effect for you is that in the log segment, thresholds will
trigger sooner than it looks like they should: there may appear to be more
empty space in the log segment than the threshold claims there is. This
occurs because SQL Server reports pages actually written, while it acts on
pages reserved.

This will be particularly noticeable if your site has many users with open
transactions in a single database.

     ------------------------------------------------------------------
     Note
     Remember that the rule is three pages reserved per transaction per
     database: a thousand users with a transaction open in one database
     will have reserved 3000 pages in the log for that database.
     ------------------------------------------------------------------

Threshold Process Fails to Fire

Question

Sometimes a threshold process that dumps the transaction log when the
threshold is overrun fails to actually do the dump tran. Why did it fail?

Answer

There are many reasons the threshold procedure can fail to start. SQL Server
checks all of the following, in order:

  1. That it can allocate memory for the information the threshold procedure
     will need.
  2. That it can create a task to execute the procedure.
  3. That the threshold procedure can use the database.
  4. That the systhresholds table exists.
  5. That there is an entry in systhresholds for the threshold in question.
  6. That the threshold owner is a valid user in the database.
  7. If the database is usable by the database owner only, that the user who
     bound the threshold is the database owner.
  8. That the threshold owner is a valid server login.
  9. That the procedure named in the threshold is valid.

The step that concerns us here is step 3. If your procedure failed, check to
be sure that the database is not in single-user mode. If a database is in
single-user mode, and the user overruns a threshold, the threshold process
won't succeed in doing any work in that database, because the maximum number
of users allowed in the database is one and the user who overran the
threshold is that one.

The threshold procedure must be in the database before it starts work; if it
cannot use the database, SQL Server prints Error 7403 and stops:

Threshold task could not use database %d, and so
cannot execute the threshold procedure for segment
%d, free space %ld.

This behavior affects all threshold procedures, regardless of whether they
do any work that is not directly involved with the database.
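A minimal threshold procedure, then, does nothing but dump the log. This
sketch follows the last-chance threshold parameter convention documented in
the System Administration Guide (the dump device name is hypothetical):

        create procedure sp_thresholdaction
            @dbname     varchar(30),
            @segname    varchar(30),
            @space_left int,
            @status     int
        as
            dump transaction @dbname to logdump_device
        go

Because the procedure must use the database before it can do any work, even
a procedure this small is subject to the single-user-mode failure described
above.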

Maximum Timestamp Value

Question

Is there a maximum value for the timestamp in the log? If there is, what
happens when that number is reached?

Answer

Yes, there is a maximum timestamp value. If we ever reach that value, the
next assigned timestamp value would be 0 (not 1). If the timestamp does roll
over, corruption might result; however, before this happens, you will be
warned.

SQL Server attempts to warn you that the database is approaching the maximum
timestamp value by checking that the current database timestamp value is
less than 0xffff.feffffff. This validation check occurs each time the
dbtable is created for the database. If the timestamp is greater than
0xffff.feffffff, SQL Server raises Error 935:

WARNING - the timestamp in database `%.*s' is approaching the maximum allowed.

When you receive Error 935, you should do one of the following as soon as
possible:

   * Create a new database the same size as the old, execute sp_dboption
     "select into" for that database, then use select into to recreate the
     tables.
   * Bulk copy your data out, drop and re-create the database, then bulk
     copy the data back in.
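A sketch of the first method, using hypothetical database and table names:

        sp_dboption newdb, "select into/bulkcopy", true
        go
        use newdb
        go
        checkpoint
        go
        select * into newdb..titles from olddb..titles
        go

Repeat the select into for each table. Because the copy is made into a newly
created database, the new database starts over with a low timestamp value.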

Explanation

Sybase timestamps are unsigned 48-bit numbers; that is, they can hold
integer values from 0 through 2^48 - 1, or 281,474,976,710,655. In recovery,
SQL Server decides what has and hasn't been done by comparing timestamps; if
the timestamp on the page is smaller than the one in the log record, then
this change hasn't been made, so SQL Server makes it.

If the timestamp were ever to roll over, the timestamp in a log record might
well be something like 0xffff.ffffffac, and the timestamp on the page
something like 0x0000.00000013. Logically, the timestamp on the page should
be later than the one in the log, because the timestamps have wrapped
around, but SQL Server won't detect that. All SQL Server knows is that the
page timestamp is smaller than the log record timestamp, so it will make the
change specified by the log record, thus corrupting the data page.

Bear in mind that timestamp can take quite a long time to roll over. For
instance, suppose that your server makes a change in the database every
millisecond (1000 changes per second), all day every day. At that rate, it
will take more than 8,900 years for the timestamp to roll over (2^48 /
(1000 * 60 * 60 * 24 * 365.25) is approximately 8,920). However, improper
use of dbcc rebuild_log
or dbcc save_rebuild_log could result in the timestamp reaching the maximum
value. When this occurs, SQL Server will generate Error 6901:

Overflow on High component of timestamp occurred in database %d.  Database table possibly corrupt.

Again, if you receive this error, use one of the methods described above to
re-create your database.

     ------------------------------------------------------------------
     Note
     This information applies to SQL Server 11.0 as well as to previous
     releases.
     ------------------------------------------------------------------

Dumping Multiple Products to One Tape

Question

I want to back up all my databases, both Sybase and Oracle, onto the same
tape each night in a batch. How do I do this?

Answer

You can't. Backup Server expects a tape to conform to a format similar to
the ANSI standard, in which the first record on the tape is a "volume
header," followed by some "file header" records. Because other vendors do
not use the ANSI standard format, Backup Server cannot use tapes that
contain other vendors' data.

Sybase has no plans to change this in the foreseeable future.

Dump Transaction and Error 4207

Question

When I try to dump transaction in my database, I get error 4207, which says
that dump transaction is not allowed while the select into/bulk copy option
is enabled. The problem is, that option isn't actually enabled in that
database. What happened?

Answer

What you're seeing is the old text of error 4207. It caused a lot of
confusion, so Sybase changed it. As of SQL Server 11.x, it says:

Dump transaction is not allowed because a non-logged operation was performed on the database.
Dump your database or use dump transaction with truncate_only until you can dump your database.

What happened to you is one of several things:

   * You performed a fast (unlogged) bulk copy into your database.
   * You performed a select into a table in your database.
   * You used dump transaction with truncate_only or no_log in your
     database.

All of these operations make changes that are not recorded in the
transaction log. Because the log doesn't have a record of those changes, it
isn't possible to rebuild your database from only the log. Thus, as soon as
one of those things happens in a database, SQL Server disallows dump
transaction until the next dump database has been performed. The database
dump gives the server the necessary information to re-create those changes,
so it is then possible to use transaction log dumps to re-create the rest of
the changes.
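In other words, the recovery sequence is simply (names hypothetical):

        dump database mydb to "/dumps/mydb.dmp"
        go
        dump transaction mydb to "/dumps/mydb_tran.dmp"
        go

Once the dump database completes, logged dump transaction is allowed again.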

Trace Flags 322 and 323 in 4.9.x & 10.x

Question

I have two 4.9.2 SQL Servers running at different Rollup levels. Certain
queries containing subqueries run much more slowly on the newer Rollup than
on the older one. The showplan output reveals that the newer Rollup uses a
four-step plan where the older one uses only two steps. I have heard that
two SQL Server trace flags, 322 and 323, might help me. What do these trace
flags do?

Answer

A fix for an optimizer bug, 13495 (also listed as 17230), changed the
behavior of the optimizer. 13495 ensures that subqueries under ors are not
unnested, so that correct results are returned in all cases. The following
table lists the platforms and Rollup numbers in which this change first
appeared:

              Platform                 4.9.2 Rollup Number
  SunOS Release 4.x (BSD)             4152
  HP 9000 Series 800 HP-UX            4153
  IBM RISC System/6000 AIX            4154
  AT&T System 3000 UNIX SVR4 MPRAS    4155
  Digital OpenVMS VAX                 4156
  Sun Solaris 2.x                     4157
  Digital OpenVMS Alpha 1.5           4158
  Digital UNIX                        n/a

If you are running one of these Rollups, or any Rollup released later, you
may find that you now get poor performance with certain queries containing
subqueries. To fix this, you can try using trace flags 322 or 323. Trace
flag 322 is to be used at SQL Server startup; trace flag 323 is the
interactive version, invoked with the command dbcc traceon(323). Both trace
flags disable the fix for bug 13495.
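For example, to turn the fix off interactively (and back on again when you
are done), you would use:

        dbcc traceon(323)
        go
        /* run the affected queries here */
        dbcc traceoff(323)
        go

Trace flag 322 would instead be added to the dataserver command line (as
-T322) in the server's RUN file so that it takes effect at startup.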

     ------------------------------------------------------------------
     WARNING!
     Disabling this fix may return performance for these queries to
     your previous levels, but may also cause SQL Server to return two
     sorts of incorrect results:

        * Because subqueries are processed as joins, existence checks
          may return duplicates where they should not.
        * The construction or x in this statement:
          select a from b where...
          will not work when b is empty.

     ------------------------------------------------------------------

If you are running SQL Server release 10.x and encountering this particular
performance problem, you may use EBFs 4428 and higher, which include these
trace flags.

These trace flags are not available in SQL Server 11.0 because subquery
processing has been substantially rewritten.

Installing Async I/O on HP-UX 10.0 via SAM

Release 10.0 of HP-UX allows installation of the asynchronous I/O driver
using SAM (the System Administration Manager). This eliminates the need to
run the installasync script currently shipped with Sybase SQL Server
releases 10.0 and below for HP-UX. To install async I/O with SAM, follow
these steps:

  1. Invoke SAM and select Kernel Configuration.
  2. Select Drivers within the Kernel Configuration menu.
  3. Change the Pending State for asyncdsk to In.
  4. Rebuild the Kernel and Reboot the system (using the Actions menu
     option).
  5. Execute the following statements as "root":

     /etc/mknod /dev/async c 101 5

     chmod 0660 /dev/async

     chown sybase /dev/async

     chgrp sybase /dev/async

          -------------------------------------------------------------
          Note
          Step 5 may also be performed prior to step 1.
          -------------------------------------------------------------

In fact, as of System 11, Sybase will no longer ship an installasync script.
Instead, use the steps described above.

If you have questions about SAM, please call HP Technical Support.

Ultrix: 4.4 vs. 4.3a Compatibility

Sybase has determined that Digital Ultrix 4.4 is not binary compatible with
Ultrix 4.3a, and we do not recommend that customers run the 4.2 SQL Server
under Ultrix 4.4. Sybase has no plans to recompile the SQL Server on Ultrix
4.4; any further issues relating to the compatibility of Ultrix 4.4 should
be addressed directly to Digital Equipment Corp.

Latest Rollups for SQL Server

The following table lists the latest Rollups for SQL Server on all
platforms.

                Platform                   Release Number  Rollup Number
  Sun OS Release 4.x (BSD)                 10.0.2          5539
  Sun OS Release 4.x (BSD)                 4.9.2           5631
  HP 9000 Series 800 HP-UX                 10.0.2          5540
  HP 9000 Series 800 HP-UX                 4.9.2           5632
  IBM RISC System/6000 AIX                 10.0.2          5541
  IBM RISC System/6000 AIX                 4.9.2           5633
  AT&T (NCR) System 3000 UNIX SVR4 MPRAS   10.0.2          5542
  AT&T (NCR) System 3000 UNIX SVR4 MPRAS   4.9.2           5634
  Digital VAX OpenVMS                      10.0.2          5543
  Digital VAX OpenVMS                      4.9.2           5635
  Sun Solaris 2.x                          10.0.2          5544
  Sun Solaris 2.x                          4.9.2           5636
  Digital OpenVMS Alpha 1.5                10.0.2          5545
  Digital OpenVMS Alpha 1.5                4.9.2           5637
  Digital OSF/1                            10.0.2          5549
  Digital OpenVMS Alpha 1.0                4.9.2           5637
  Novell Netware                           10.0.2          5547
  OS/2                                     10.0.2          4726
  Windows NT                               10.0.2          5546

In the case of 4.9.2 Rollups, there are subsequent one-off SWRs (formerly
EBFs) that you may order for specific bug fixes, but Sybase recommends you
upgrade rather than ordering a one-off SWR at this point.

Latest Rollups for PC Platforms

The following tables list the latest rollups of Sybase products other than
SQL Server for PC platforms.

              Table 3: EBFs for DOS Platforms

           Product Group                Release Number  SWR Number
  Open Client/C Developers Kit          10.0.2          4907
  DB-Library                            4.2.5           4987
  DB-Library                            4.2.6           4987
  Net-Library FTP PC/TCP                1.0.3           3666
  Net-Library Microsoft TCP             1.0.3           5408
  Net-Library Named Pipes               1.0.2           2387
  Net-Library Novell IPX/SPX            1.0.2           3661
  Net-Library Novell LAN Workplace      1.0.3           3665
  SQR Workbench                         2.5             2489
  SQR Execute                           2.5             2490

          Table 4: EBFs for Netware Platforms

           Product Group                Release Number  SWR Number
  Open Client/C Developer's Kit         10.0.3          5509
  DB-Library                            4.6             2849
  Replication Server                    10.1            4922

              Table 5: EBFs for OS/2 Platforms

           Product Group                Release Number  SWR Number
  Open Client/C Developer's Kit         10.0.3          5677
  DB-Library                            10.0.2          4146
  DB-Library                            4.2             4721
  Net-Library Named Pipes               1.0.2           3904
  Net-Library Named Pipes               2.0             2698
  Net-Library Novell SPX/IPX            1.0.2           3982
  Net-Library Novell IPX/SPX            2.0             1612
  Net-Library Novell LAN WorkPlace      1.0.2           2534
  Net-Library IBM TCP                   10.0.1          3184
  SQR                                   2.4             1822
  Open Server                           10.0.2          3905
  Open Server                           2.0             3436

          Table 6: EBFs for PC Windows Platforms

           Product Group                Release Number  SWR Number
  Open Client/C                         10.0.3          5735
  DB-Library                            4.2.5           5185
  Net-Library FTP PC/TCP                10.0.1          3777
  Net-Library Named Pipes               10.0.1          5303
  Net-Library NEWT                      1.0.3           3158
  Net-Library Novell LAN WorkPlace      1.0.2           2472
  Net-Library WinSock                   1.0.3           5146
  ODBC                                  10.0.1          5736
  Embedded SQL/C Precompiler            10.0.2          4653
  Embedded SQL/Cobol Precompiler        10.0.2          4269
  Embedded SQL/C Precompiler            4.0.4           3607
  SQL Monitor Client                    10.1.2          4723
  SQR Workbench                         2.5             2492
  SQL Server Manager                    10.3            5099

         Table 7: EBFs for Windows NT Platforms

           Product Group                Release Number  SWR Number
  Open Client/C Developer's Kit         10.0.3          5655
  Replication Server                    10.1            5112
  Open Server                           10.0.3          5513
  Manager Server                        10.1            4117
----------------------------------------------------------------------------

Disclaimer: No express or implied warranty is made by Sybase or its
subsidiaries with regard to any recommendations or information presented in
SYBASE Technical News. Sybase and its subsidiaries hereby disclaim any and
all such warranties, including without limitation any implied warranty of
merchantability or fitness for a particular purpose. In no event will Sybase
or its subsidiaries be liable for damages of any kind resulting from use of
any recommendations or information provided herein, including without
limitation loss of profits, loss or inaccuracy of data, or indirect,
special, incidental, or consequential damages. Each user assumes the entire
risk of acting on or utilizing any item herein, including the entire cost of
all necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey

Contributing Writers:
Lance Andersen, Peter Dorfman, Sekhar Prabhakar, Loretta Vibberts, Elton
Wildermuth

Send comments and suggestions to:

SYBASE Technical News
6475 Christie Avenue
Emeryville, CA 94608

or send mail to technews@sybase.com

Copyright 1996  Sybase, Inc. All Rights Reserved.

   
   
                            Q10.3.2: TECHNICAL NEWS
                                       
 Special Supplement
February, 1996

   
     _________________________________________________________________
   
   
   
   This supplement to the SYBASE Technical News contains important
   information for those about to migrate to the System 11 release. If
   needed, duplicate this supplement and distribute it to others in your
   organization. These changes are also documented in more detail in the
   SQL Server Reference Manual released with System 11, in What's New in
   Sybase SQL Server Release 11.0?, and in the Release Bulletin.
   
   SQL Server 11 Database and Application Migration Checklist
   
   If you plan to upgrade to SQL Server release 11.0, keep in mind the
   following changes and new features as you prepare your existing
   installation for the upgrade.
   
   Migration Issues from System 10 and Earlier Releases
     * Databases Online / Offline
     * New File for Configuration Values
     * Deadlock Checking Period
     * New page utilization percent Parameter
     * Query Processing Changes
     * Memory and Cache
     * sybsystemprocs Changes
     * New and Changed Error Messages
     * New Keywords
     * Backup Server Changes
     * New Output from System Stored Procedures
       
   
   
   Migration issues from releases prior to System 10 only
     * Dumps and Loads are Handled Differently
     * Remote Connections are Automatic
     * Backup Server Must be Started/Stopped
     * Backup Server in the Interfaces File
     * The null Device Goes Away
     * Changes to Dump Scripts
     * Loading from Multiple-Tape Devices
     * No dump tran to Device After Non-Logged Text Writes
     * No Dumps from Within Transaction
     * "Run" File Name Change
     * New Database for Stored Procedures
     * Thresholds
     * New Login/Password Protocols
     * New create table Permission
     * New numeric Datatype
     * Display Format Changes
     * Online Datatype Hierarchy
     * Conversion Changes
     * Change to set arithabort/arithignore
     * Changes in Subquery Processing
     * Change in Comment Protocol
     * Change in Permitted Syntax
     * No more NULL Column Headings
     * More Consistency Required for Correlation Names
       
   
   
   Last Minute Checklist
   
   Important Steps After the Upgrade
   
   _MIGRATION ISSUES FROM SYSTEM 10 AND EARLIER RELEASES_
   
   This section applies to you if you are migrating from SQL Server
   release 10.0, or any release prior to 10.0, to release 11.0.
   
   _DATABASES ONLINE / OFFLINE_
   
   Beginning with SQL Server release 11.0, as part of the automatic
   upgrade mechanism, a database has two states, online and offline.
   
   Issuing a load database command takes a database offline. If you have
   scripts that load databases, you must add an online database command
   to your script to make the database available again after the load
   sequence completes. A load sequence consists of a load database
   execution and a complete set of load transaction statements.
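    As a sketch of the change described above, a load script now needs an
    explicit online database step at the end of the load sequence. The
    database and device names below are placeholders:

```sql
-- Hedged sketch: a System 11 load sequence. "mydb" and the dump
-- device paths are assumptions for illustration only.
load database mydb from "/dev/nrmt0"
go
load transaction mydb from "/dev/nrmt1"
go
-- New in release 11.0: the database stays offline until this command.
online database mydb
go
```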
   
   _NEW FILE FOR CONFIGURATION VALUES_
   
   In releases prior to SQL Server release 11.0, SQL Server stored
   configuration values in the first page of the master device, known as
   the "config block". Most of these values have been moved to a flat
   file known as the configuration file. As you upgrade, there are
   several things to keep in mind regarding this file.
     * Backing up the master database does not back up the configuration
       file. You must either back up your configuration file separately
       and restore it when you load your database, or else, after loading
       a master database, you must issue sp_configure with the restore
       option and shutdown and restart SQL Server.
     * Some of the configuration parameters available through
        sp_configure have new names. For example, memory is now called
        total memory. If you run sp_configure "memory", value, you will
        get this error: "Configuration option is not unique." If you have
       scripts that use sp_configure to set or report on configuration
       parameters, you will need to change them if the names have
       changed. You should test all your scripts that use parameter names
       listed in the table "New configuration parameter names" in Chapter
       3, "Changes That May Affect Existing Applications," of What's New
       in Sybase SQL Server Release 11.0?
     * The reconfigure command is no longer required after running
       sp_configure. Any of your scripts that include reconfigure will
       continue to work; the reconfigure command is ignored. You may want
       to consider removing reconfigure commands from your scripts now,
       however, as they may not be supported in future releases.
     * If you currently set a trace flag in your runserver file, it may
       now need to be set using sp_configure or by editing the
        configuration file. For example, if you use a trace flag to print
       deadlock information to your error log (-T1204 in your runserver
       file on UNIX), you must remove it from your runserver file and set
       deadlock information printing with sp_configure or by editing your
       configuration file. The 1204 trace flag in the runserver file will
       no longer work. You should check whether other trace flags you are
       currently using have been converted to the configuration file and
       reset them after you upgrade. Some of these trace flags are 1603,
       1610, and 1611. Trace flags that are now configuration parameters
       are listed in Table 3-2, "New configuration parameter names" in
       Chapter 3, "Changes That May Affect Existing Applications," of
       What's New in Sybase SQL Server Release 11.0?
     * The buildmaster executable no longer supports the -y and -r flags.
       You must use sp_configure or the configuration file in place of
       these flags. If you have scripts that use these flags, you must
       rewrite them using sp_configure, or save and edit copies of the
       configuration file.
       
   
   
   Refer to "Setting Configuration Parameters" in the System
   Administration Guide for information about the configuration file and
   sp_configure options.
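    The restore step mentioned above can be sketched as follows, using the
    four-argument sp_configure syntax documented for System 11 (the file
    name is an assumption for illustration):

```sql
-- Hedged sketch: after loading a master database, regenerate the
-- configuration file from the values stored in the server.
-- "/sybase/SYBASE.cfg" is a placeholder path.
sp_configure "configuration file", 0, "restore", "/sybase/SYBASE.cfg"
go
-- Then shut down and restart SQL Server so the values take effect.
shutdown
go
```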
   
   _DEADLOCK CHECKING PERIOD_
   
   SQL Server release 11.0 introduces a new configuration parameter,
   deadlock checking period. SQL Server performs deadlock checking after
   a minimum period of time for any process waiting for a lock to be
   released. By default, this minimum period of time is 500 milliseconds,
   but you can change this parameter with deadlock checking period. If
   you expect your applications to deadlock infrequently, you can reduce
   overhead cost by delaying deadlock checking. However, increasing
   deadlock checking period causes longer delays before deadlocks are
   detected.
   
   SQL Servers prior to release 11.0 initiated deadlock checks as soon as
   a task had to wait for a lock. To get the same behavior in release
   11.0, set the deadlock checking period to 0 in the configuration file.
   
   
   _NEW PAGE UTILIZATION PERCENT PARAMETER_
   
   A new configuration option, page utilization percent, saves time by
   allocating new extents rather than searching the OAM page chain. This
   makes page allocation faster, but may waste space.
   
   The default behavior of previous versions of SQL Server is always to
   search the OAM page chain for unused pages before allocating a new
   extent. To keep this behavior in SQL Server release 11.0, set the page
   utilization percent to 100 in the configuration file.
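    A minimal sketch of keeping the old allocation behavior via
    sp_configure (editing the configuration file directly is equivalent):

```sql
-- Hedged sketch: always search the OAM page chain for unused pages
-- before allocating a new extent, as pre-11.0 servers did.
sp_configure "page utilization percent", 100
go
```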
   
   _QUERY PROCESSING CHANGES_
   
   A new strategy known as MRU (most recently used or fetch-and-discard)
   is available to the optimizer. This strategy is used in cases where a
   large query would overwrite heavily used pages in cache.
   
   For example, on SQL Server releases prior to 11.0, users running small
   transactions often find the necessary pages in cache (avoiding
   physical I/O). When another user runs select * from a very large
   table, the cache is overwritten with pages from this table. This
   forces the other users' queries to use more physical I/O because their
   pages are no longer in cache.
   
   The MRU strategy is designed to reuse the same buffers in cache and to
   avoid overwriting all existing pages. You can execute set showplan on
   and then run your query to see what plan the optimizer chooses. If you
   see LRU (least recently used), your SQL Server is running with
   pre-11.0 behavior; if you see MRU, you are running with the new
   behavior. You may override the MRU plan if you are unhappy with its
   performance by using the lru option in queries, as described in the
    section on select in the SQL Server Reference Manual. To get pre-11.0
    behavior for a query, construct it as follows:
   
   select * from table (index table prefetch 2 lru)
   
   _SUBQUERY PERFORMANCE CHANGES_
   
   Subquery handling has been improved for SQL Server release 11.0. For
   most subqueries, SQL Server 11.0 should give you better performance
   than 10.x. You may see different results in some cases because of bugs
   that were fixed in SQL Server release 11.0. Test all your subqueries
   under SQL Server release 11.0 before transferring production databases
   to the new system.
     _________________________________________________________________
   

     Note
     If you notice performance degradation after upgrade, check your
     data cache and increase it if necessary. SQL Server 11.0 needs
     more memory than 10.0, and will take memory from the data cache if
     it doesn't have enough, causing the cache to shrink.

   
   
   Subqueries are no longer allowed in updatable cursors.
   
   One type of subquery may be slower in 11.0 than 10.x: this involves an
   expression subquery where the outer table is very large and has few
   duplicate correlation values, the inner table is small, and the
   subquery contains an aggregate. In this case, the optimizer will not
   flatten the query to be processed as a join. Such a query might look
   like this:

        select * from huge_table where x=
                (select sum(a) from tiny_table
                where b = huge_table.y)

   
   
   To get faster results, you can reformulate the query to mimic the
   behavior of System 10, as follows:

        select huge_table.y, s=sum(a)
                into #t
                from huge_table, tiny_table
                where b=huge_table.y
                group by huge_table.y

        select huge_table.*
                from huge_table, #t
                where x=#t.s
                and huge_table.y=#t.y

   
   
   If you have stored procedures, triggers or views that contain
   subqueries, they will not automatically be upgraded to take advantage
    of the new subquery changes. Until you drop and re-create the
   compiled object, it will continue to behave as it did in earlier
   releases. Once you drop and re-create it you will see the new
   performance and results expected from an 11.0 subquery. If you are
   upgrading a test system you may want to rename your old stored
   procedure, create the new 11.0 style procedure and run tests to see
   the differences in performance and results.
     _________________________________________________________________
   

     _WARNING!_
     After upgrading a production system, drop, re-create, and test the
     behavior of all compiled objects containing subqueries. If you
     leave your procedure with the old behavior and then have to drop
      and re-create it for some reason, you will see the new 11.0
     behavior. There is no way to go back to the old behavior once the
     procedure is dropped and re-created. If the new behavior does not
     work well for your production system, you are stuck. It is best to
     test out procedures that contain subqueries and make any changes
     necessary for your application before upgrading your production
     system.

   
   
   Once the upgrade is complete, drop and re-create all compiled objects
   containing subqueries.
   
   To determine which compiled objects contain subqueries and whether
   they are running at 11 level or pre-11 level, use the stored procedure
   sp_procqmode. Refer to the SQL Server Reference Manual for details.
   
   The output for set showplan on has been changed. If you have any
   applications that rely on the output generated by this command, they
   may need to be changed.
   
   set dup in subquery No Longer Supported
   
    The set dup in subquery command introduced in SQL Server release 10.0
    is no longer supported. If you have applications that use it, you will
   receive a warning message and your subqueries will no longer return
   duplicates. You may have used this option to obtain better
   performance. Because of subquery processing changes introduced in this
   release, if you really want duplicates, rewrite your query as a join.
   You should see better performance in SQL Server release 11.0 for these
   types of queries.
   
   Only 16 Subqueries on One Side of a union
   
   A new restriction has been imposed. You are now allowed only 16
   subqueries on one side of a union. This should not affect most
   customers because you are only allowed 16 tables within one query.
   This should affect you only if you have more than 16 subqueries and
   some of them have no from clauses.
   
   Subqueries and NULL Results
   
   Prior to SQL Server release 11.0, a correlated expression subquery in
   the set clause of an update returned 0 instead of NULL when there were
    no matching rows. SQL Server release 11.0 correctly returns NULL when
    there are no matching rows; if the target column does not allow null
    values, this raises an error. If you have
   applications that depend on the pre-11.0 behavior, you will need to
   rewrite them.
   
   For example, the following trigger tries to update a column that does
   not permit NULL values:

        update t1
                set c1 = (select max(c1)
                from inserted where t1.c2 = inserted.c2)

   
   
   The correct trigger is:

        update t1
                set c1 = (select isnull(max(c1), 0)
                from inserted
                where t1.c2 = inserted.c2)

   
   
    The isnull function sets t1.c1 to 0 if the subquery returns no
    correlation values from the outer table t1.
   
    _MEMORY AND CACHE_
   
   More memory is used for the Sybase kernel and for internal structures
   including the new user log cache. You may need to add more total
   memory to your server to maintain the same performance as your
   previous release.
   
    Also, compiled objects have grown in SQL Server release 11.0 (and grew
    considerably in release 10), so you may need to enlarge your procedure
    cache to maintain the same performance.
   
   sybsystemprocs Changes
   
   Your sybsystemprocs database will need to be larger than it was for
   release 10.x SQL Server, to make room for the new system stored
   procedures in release 11. Check your SQL Server Installation Guide for
   the correct size, and alter sybsystemprocs before beginning the
   upgrade. If sybsystemprocs is not the correct size, sybinit will fail.
   
   
   During upgrade from System 10 to SQL Server release 11.0, all system
   stored procedures are dropped from sybsystemprocs. If you have
   customized any of your system stored procedures, you will lose them in
   this process unless you rename them before the upgrade. If you do not
   rename them, you will have to re-customize them after the upgrade.
   
   _NEW AND CHANGED ERROR MESSAGES_
   
   Many error messages have been added and some have had their text
   changed to improve their intelligibility. If you rely on the text of
   any error messages within your applications, you should check to be
   sure they have not changed. You can select * from sysmessages on SQL
   Server release 11.0 to see the text changes.
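    For example, rather than selecting the whole table, you can pull just
    the messages your applications parse and compare them against your
    current release. The error numbers below are illustrative choices only:

```sql
-- Hedged sketch: list the text of specific messages an application
-- depends on, for comparison across releases. 1105 (log full) and
-- 4207 (dump transaction) are example error numbers, not a fixed list.
select error, severity, description
from master..sysmessages
where error in (1105, 4207)
go
```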
   
   _NEW KEYWORDS_
   
    Many new keywords were added in SQL Server release 10, most to
    support ANSI 89 features. Two more keywords were added in SQL Server
    release 11.
     * Any database whose name is a new keyword must have its name
       changed before you upgrade.
     * Any object (table, column, and so on) whose name conflicts with a
       new keyword will yield a syntax error when accessed under a
       release 11 SQL Server. We recommend that you change their names
       before upgrade; you will also have to change any applications that
       reference those objects. For example:
       
     select user from table_x
     will yield a syntax error because "user" is a keyword as of System
     10.
     
   
   
   SQL Server release 11 includes a stored procedure, sp_checkreswords,
   which checks for identifier names that are keywords. Use sybinit to
   run this stored procedure before you start the upgrade itself. Load
   the Sybase files with sybload, begin a sybinit session (as described
   in the SQL Server Installation Guide), and select "Check for reserved
   word conflicts" from the "SQL Server Upgrade" menu. If your existing
   databases use any of the new keywords as identifiers, sybinit displays
   the following message, including the number of conflicts it found:

        _Warning:_ x conflicts with 10.0 reserved
        words were found.
        Sybase suggests that you resolve these conflicts before
        upgrading the SQL Server. Run `sp_checkreswords' on each
        database for more info.

   
   
   You must change database names that conflict with reserved words
   immediately, because the upgrade will fail if database names conflict
   with reserved words.
   
   We also recommend that you change any other object names (tables,
   columns, and so on) that are reserved words before upgrading. Remember
   that you will have to change all applications that reference that
   object also. See the SQL Server Reference Manual for information on
   changing object names.
   
   If you choose not to change other object names, you can use the set
   quoted_identifier option. You must add the following set command to
   all your applications and put quotes around all keywords when you
   issue your T-SQL statements. For example:

        set quoted_identifier on
        select "user" from table_x

     _WARNING!_
     The sp_checkreswords procedure will not check the contents of
     stored procedures. For example, if you have a procedure such as
     the following:

             create procedure x as
                 create table user (a int)

     you will receive a syntax error on running the procedure. Test all
     your stored procedures and triggers to be certain that they use no
     keywords as identifiers.

   
   
   sybinit installs sp_checkreswords automatically. If you subsequently
   want to run reserved word checks, use the following command sequence
   for each database you want to check:

        % isql -Usa -Ppassword -Sservername
        1> use database_name
        2> go
        1> sp_checkreswords
        2> go

   
   
   _NEW OUTPUT FROM SYSTEM STORED PROCEDURES_
   
   Many of the existing system stored procedures provide new and improved
   output. If you are currently running them within your applications,
   you should check to see what they report under SQL Server release
   11.0.
   
   _BACKUP SERVER CHANGES_
   
   The Backup Server has a new feature to deal with tape devices
   unfamiliar to it. If you dump a database to a tape device that is not
   one of the devices mentioned in the System Administration Guide
   Supplement for your platform, and Backup Server cannot determine the
   device type, the dump command fails.
   
   Consequently, when you first dump to a new tape device, you should use
   the with init option to the dump command. It will take some time for
   the Backup Server to read and write to the tape in order to determine
   how to communicate with it. When Backup Server finishes this first
   test, it will write a line to the new tape configuration file, called
   $SYBASE/backup-tape.cfg by default, which is created during upgrade or
   install. You should manage this file as part of the backup strategy
   for your site.
   
   _MIGRATION ISSUES FROM RELEASES PRIOR TO SYSTEM 10 ONLY_
   
   This section applies specifically to you if you are migrating to SQL
   Server release 11.0 from a SQL Server release prior to 10.0. You
   should review both this section and the previous section, ``Migration
   Issues from System 10 and Earlier Releases''. These features were
   added in System 10, and also exist in SQL Server release 11.0.
   
   _DUMPS AND LOADS ARE HANDLED DIFFERENTLY_
   
   Dumps and loads are now handled by a separate process called the
   Backup Server. This runs in addition to SQL Server, so you may need to
   increase memory limits. When upgrading your database, you must install
   a Backup Server through the install procedure (sybinit) or you will be
   unable to do dumps and loads on your System 10 or 11 SQL Server.
   
   On UNIX platforms, additional small processes called sybmultbuf will
   be started by the Backup Server to communicate with your database and
   dump devices.
   
   You should run some tests with dump and load to be sure your scripts
   work and to monitor the additional machine resources needed to do a
   dump or load on your system.
   
   Recovery procedures on system databases have changed with the Backup
   Server and the sybsystemprocs database. Consult your System
   Administration Guide and test your procedures for recovery.
   
   _REMOTE CONNECTIONS ARE AUTOMATIC_
   
   Remote connections are now automatically enabled when SQL Server is
   upgraded or installed. This is necessary so that SQL Server can
    communicate with Backup Server. However, the "allow remote access"
    configuration parameter is now dynamic, so you will no longer have to
    reboot SQL Server in order to change this behavior.
   
   _BACKUP SERVER MUST BE STARTED/STOPPED_
   
   You must now start and stop Backup Server, as well as the SQL Server.
   The install or upgrade program will set up the appropriate scripts to
   start Backup Server.
   
   There is an option in the shutdown command to stop Backup Server:
   
   shutdown [backup_server] [with {wait|nowait}]
   
   You should initiate your own procedures for starting and stopping the
   Backup Server process as appropriate. If you will be using threshold
    procedures to dump the transaction log to a device, you should
    always have Backup Server running.
   
   _BACKUP SERVER IN THE INTERFACES FILE_
   
   You must now maintain entries in your interfaces files for Backup
   Servers you use. The install or upgrade program will make the
   necessary entry for your local machine, but you may need to determine
   if those entries need to be distributed to other copies of your
   interfaces file.
   
   _THE NULL DEVICE GOES AWAY_
   
   The device /dev/null on UNIX or NL on VMS will no longer be available.
   
   
   sybinit will remove the default entries for these devices from the
   sysdevices table. Check any scripts that do dumps to be sure you are
   not using one of these default devices or any other dump device that
   points to /dev/null or NL. If you do not change these scripts, your
   dumps will fail with the following error:
   
     Backup Server: 4.56.2.1: Device validation error: couldn't obtain
     tape drive characteristics for device /dev/null, error: Invalid
     argument
     
   
   
   _CHANGES TO DUMP SCRIPTS_
   
   Backup facilities have changed as of SQL Server release 10.0. This
   section describes the minimum changes you must make to existing dump
   scripts.
   
   The single feature that can most affect your current use of tapes and
   dump commands is Backup Server's ability to make multiple dumps to a
   single tape. The new dump is placed after the last existing file on
   the current tape volume.
   
   Here are some guidelines for backing up your databases immediately
   after upgrading:
     * If you use new tapes, or tapes without ANSI labels, your pre-10.0
       dump scripts overwrite the entire tape.
     * If you use single-file media (for example, quarter-inch cartridge)
       with ANSI labels, and the expiration dates on the tapes have
       expired, pre-10.0 dump scripts will overwrite the tapes.
     * If the expiration date on a single-file tape has not been reached,
       you will be asked to confirm the overwrite; a positive response
       will overwrite the existing tape; a negative response initiates a
       request for a volume change, and tests are repeated on the new
       volume.
     * If you use multi-file tape media, and do not change your dump
       scripts, the dump will be appended to the existing files on the
       tape.
     * If you want to overwrite existing tapes that have ANSI labels, you
       must append the with init clause to existing dump commands:

             dump database mydb
                     to datadump1
                     with init

     You can also use operating system commands to erase or truncate the
     tape.

   
   
   _LOADING FROM MULTIPLE TAPE DEVICES_
   
   A second connection to SQL Server is now required when loading a
   database to a tape device that spans multiple tapes. This allows you
   to issue the sp_volchanged stored procedure (which notifies Backup
   Server that the tape operator has finished handling a volume, such as
   changing a tape).
   
   If you execute the command load database master to a tape device that
   requires a volume change, you will need a second running SQL Server to
    issue the sp_volchanged stored procedure. This is because SQL Server
    must be in single-user mode to load master, so you cannot log in a
    second time to send the volume change request. For this reason, Sybase
   Technical Support strongly recommends you ensure your tape device has
   enough space to hold the full dump of master before you dump the
   master database, or else that you use a disk device for your master
   backups.
   
   _CHANGES TO RENAMING DATABASES_
   
    If any table in the database references, or is referenced by, a table
    in another database, sp_renamedb cannot rename the database. It produces
   the following error message:

        Database `database_name' has references to other
        databases.  Drop those references and try again.

   
   
   Execute the following query to determine which tables and external
   databases have foreign key constraints on primary key tables in the
   current database:

        select object_name(tableid), db_name(frgndbid)
                from sysreferences
                where frgndbname is not null

   
   
   Execute the following query to determine which tables and external
   databases have primary key constraints for foreign key tables in the
   current database:

        select object_name(reftabid), db_name(pmrydbid)
                from sysreferences
                where pmrydbname is not null

   
   
   Before renaming the database, you must use alter table to drop the
   cross-database constraints in these tables. See sp_renamedb in the SQL
   Server Reference Manual for more information about renaming databases.
   
   
   No dump tran to Device After Non-Logged Text Writes
   
   dump tran to a device is no longer allowed after a non-logged text
   operation.
   
   If you use both on the same database, you must change your text writes
   to be logged or you will be unable to use dump tran as part of your
   backup scheme. For details on changing your text writes, consult the
   SQL Server Reference Manual (writetext command), and either the
   DB-Library Reference Manual (dbwritetext(), and so on) or Open Client
   Client-Library Reference Manuals (ct_send_data(), and so on), as
   needed.
   
   _NO DUMPS FROM WITHIN TRANSACTION_
   
    dump database and dump transaction are no longer allowed within a
    user-defined transaction.
   
   _"RUN" FILE NAME CHANGE_
   
   On UNIX platforms, the default file that startserver looks for to
   start your server is now called RUN_SYBASE rather than RUNSERVER. If
   you have a file called RUNSERVER in your install directory, during
   upgrade sybinit will change its name to RUN_SYBASE. If you use the
   startserver -fRUNSERVER command to start your server, you must change
   it to startserver -fRUN_SYBASE. If you are starting your server
   automatically when you boot your machine, make sure you change your
   system startup file if necessary.
   
   _NEW DATABASE FOR STORED PROCEDURES_
   
   System stored procedures are stored in a new database called
   sybsystemprocs. Find space for it on an existing device or find a new
   physical device to give to sybinit before you upgrade. You may also
   need to modify your dbcc, backup and recovery procedures to include
   this database.
   
   If you have your own system stored procedures, you may want to move
   them to the new database, although it is not required. If you do
   decide to move them, you must add the database name to any tables you
   reference that exist in master. You also need to include a check such
   as:

        if @@trancount > 0
        begin
            print "Can't run this procedure from within a transaction"
            return 1
        end

   
   
   In addition, you should not have any changes to master database tables
   within a transaction. Doing so can cause recovery problems on the
   master database.
   
   _THRESHOLDS_
   
   The threshold manager is a new tool for managing space in segments,
   particularly the log segment, but you need to decide how to use it
   before you upgrade. In particular you should deal with the "last
   chance threshold" on the log before you upgrade. The default behavior
   is to suspend all users when the last chance threshold is reached; if
   you encounter the last chance threshold, all users will just hang
   until some action is taken, instead of getting the 1105 error. Decide
   whether that is the behavior you want and test a last chance threshold
   procedure before you upgrade your production databases.
   
   You must use the sp_thresholdaction stored procedure to define an
   action, such as dump transaction, when the last chance threshold is
   reached. If you do not do so, when you run out of space in your log,
   all users will hang indefinitely. Below is a sample threshold
   procedure that you can tune to suit your installation. This example
   creates a threshold procedure that dumps the transaction log to a tape
   device when the last chance threshold is reached:

        create procedure sp_thresholdaction
                @dbname varchar(30),
                @segmentname varchar(30),
                @space_left int,
                @status int
        as
                dump transaction @dbname to "/dev/rmt4"

   
   
   Remember, even if you have a last chance threshold procedure to dump
   the log, you will have a problem if you have an open transaction
   filling the log. The dump transaction command will not clear the log
   because of the open transaction, and the user who has the open
   transaction will be in suspend state and so will keep the transaction
   open. If this occurs, you can use the lct_admin function documented in
   the "System Functions" section of the SQL Server Reference Manual.
   That document also describes how to set thresholds with
   sp_addthreshold.
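    As a sketch of the lct_admin escape hatch mentioned above (argument
    names per the System 11 documentation; verify the exact syntax for
    your release before relying on it, and note that the database name is
    a placeholder):

```sql
-- Hedged sketch: wake tasks suspended at the last chance threshold
-- so the open transaction can be dealt with. "mydb" is a placeholder.
select lct_admin("unsuspend", db_id("mydb"))
go
```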
   
   You will know that users are hung and have reached the last chance
   threshold because sp_who shows a status of "LOG SUSPEND." In addition,
   SQL Server will write messages to the error log listing how many tasks
   are sleeping because the log is full.
   
   You can change the last chance threshold behavior for a database to
   the old behavior of "abort the transaction with an 1105 error" by
   setting "abort xact when log is full" on with sp_dboption.
   
   Tip for Bulk Copy Users
   
    Bulk Copy handles input records in "batches," each of which is a
    transaction. This is true whether or not the bcp inserts are logged.
    When loading a
   large number of records, use the bcp ... in ... -b records batch
   option, and set the number of records to some reasonable value. This
   reduces the chance that a bcp command will hold a long transaction and
   block dump transaction from clearing enough log space.
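    A sketch of such a bcp invocation follows; the database, table, data
    file, and server names are placeholders, and 1000 is just an example
    batch size to tune for your site:

```
bcp mydb..big_table in big_table.dat -b 1000 -c -Usa -Ppassword -SSYBASE
```

    With -b 1000, each 1000-row batch commits as its own transaction, so
    dump transaction can reclaim log space between batches.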
   
   _NEW LOGIN/PASSWORD PROTOCOLS_
   
   New logins and password changes require a minimum 6-byte password.
   
   Existing passwords less than 6 bytes are left alone during upgrade,
   but new password changes will enforce the 6-byte minimum.
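    For example, a password change made through sp_password must now meet
    the 6-byte minimum (the password values shown are placeholders):

```sql
-- Hedged sketch: the first argument is the caller's current password;
-- the new password must be at least 6 bytes.
sp_password "oldpass", "newpass1"
go
```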
   
   _NEW CREATE TABLE PERMISSION_
   
   Beginning with the System 10 SQL Server, create table permission is
   explicitly granted for all users on tempdb. This permission is granted
   at server startup time.
   
   _NEW NUMERIC DATATYPE_
   
    A new numeric datatype is now available in the System 10 SQL Server.
    Unlike float, it is platform-independent and exact.
   
   If you give the System 10 SQL Server a constant such as:
   
   5.1
   
   it will be assigned the numeric datatype rather than float.
   
   If you want float, you must represent your constant as:
   
   5.1e0
   
   The following mathematical functions now return a value of type
   numeric rather than float:
     * abs
     * ceiling
     * degrees
     * floor
     * power
     * radians
     * round
     * sign
       
   
   
    If you are running a pre-System 10 front end with a System 10 Server,
    numeric datatype values will be mapped to float on the front end.
   
   _DISPLAY FORMAT CHANGES_
   
   The isql display format for approximate-numeric datatypes now displays
   additional digits of precision. Maximum units of precision for storage
   and display are machine dependent. real values now display up to 9
   digits of precision; float values, up to 17 digits. Values are rounded
   to the last digit on display. Previously, only six places to the right
   of the decimal were displayed.
   
   All values requiring more digits than the maximum are displayed in
   scientific notation, that is, a float value "1e18" is displayed as
   such, rather than 1000000000000000000.000000. Note that the new exact
   numeric types, decimal and numeric, display the entire number.
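
   The 9- and 17-digit figures correspond to the number of significant
   decimal digits required to round-trip IEEE single- and double-precision
   values exactly. A quick sketch of that underlying floating-point
   property (Python used here purely for illustration, not Sybase
   behavior):

```python
import struct

def roundtrips_double(x, digits):
    """True if formatting x with `digits` significant digits recovers x."""
    return float(f"{x:.{digits}g}") == x

def roundtrips_single(x, digits):
    """Same check, emulating 4-byte IEEE single precision via struct."""
    single = struct.unpack("f", struct.pack("f", x))[0]
    text = f"{single:.{digits}g}"
    return struct.unpack("f", struct.pack("f", float(text)))[0] == single

# 17 significant digits always round-trip a double; 9 always round-trip
# a single.
assert roundtrips_double(1.0 / 3.0, 17)
assert roundtrips_single(1.0 / 3.0, 9)
```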
   
   _ONLINE DATATYPE HIERARCHY_
   
   You can get the datatype hierarchy for SQL Server release 11.0 by
   running the following query:

        select name, hierarchy
        from systypes
        order by hierarchy

name                                            hierarchy
----------------------------                    ---------
floatn                                          1
float                                           2
datetimn                                        3
datetime                                        4
real                                            5
numericn                                        6
numeric                                         7
decimaln                                        8
decimal                                         9
moneyn                                          10
money                                           11
smallmoney                                      12
smalldatetime                                   13
intn                                            14
int                                             15
smallint                                        16
tinyint                                         17
bit                                             18
varchar                                         19
sysname                                         19
nvarchar                                        19
char                                            20
nchar                                           20
varbinary                                       21
timestamp                                       21
binary                                          22
text                                            23
image                                           24

   
   
   (28 rows affected)
   
   In pre-System 10 SQL Servers, money was above float in the hierarchy.
   It is now below both float and numeric. The following query:
   
   select $12*8.9
   
   returns a result of type numeric. In pre-System 10 SQL Servers it
   returned money. Likewise, the following query:
   
   select $12*8.9e0
   
   returns a result of type float. In pre-System 10 SQL Servers it
   returned money.
   
   If you want the pre-System 10 behavior you must use convert:
   
    select convert(money, $12*8.9e0)
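
   The new results follow mechanically from the hierarchy listing above:
   in a mixed-type expression, the operand type with the smaller hierarchy
   number wins. A toy model (Python, using hierarchy values taken from the
   systypes query output):

```python
# Hierarchy numbers from the systypes listing above; a lower number means
# higher precedence.
HIERARCHY = {"float": 2, "numeric": 7, "money": 11}

def result_type(t1, t2):
    """Result type of a mixed-type expression: the numerically lower
    (higher-precedence) hierarchy entry wins."""
    return t1 if HIERARCHY[t1] < HIERARCHY[t2] else t2

assert result_type("money", "numeric") == "numeric"  # select $12*8.9
assert result_type("money", "float") == "float"      # select $12*8.9e0
```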
   
   _CONVERSION CHANGES_
   
   As of System 10, all conversions to character succeed only if no
   decimal digits are lost. In previous versions of the server, floating
   point to character conversions allowed some truncation without
   warning.
   
   Change in Money Conversion
   
   All conversions to money datatypes round to four places.
   
   When an explicit conversion of one numeric value to another results in
   loss of scale, the results are truncated without warning. For example,
   explicitly converting a float to an integer causes SQL Server to
   truncate all values to the right of the decimal point.
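
   The two behaviors above can be mimicked with exact decimal arithmetic.
   A small sketch (Python's decimal module; the half-up rounding mode for
   money is an assumption, since the text only states "four places"):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

# Explicit numeric-to-integer conversion truncates toward zero, without
# warning.
as_int = Decimal("3.7").quantize(Decimal("1"), rounding=ROUND_DOWN)
assert as_int == Decimal("3")

# Conversions to money round to four decimal places (mode assumed
# half-up).
as_money = Decimal("1.23456").quantize(Decimal("0.0001"),
                                       rounding=ROUND_HALF_UP)
assert as_money == Decimal("1.2346")
```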
   
   Change in Integer-to-Character Conversion
   
   Conversions from integer to character now return an error if an
   overflow occurs. They formerly returned a buffer of "*".
   
   Change to set arithabort/arithignore
   
   The set arithabort and set arithignore commands have changed behavior
   in some cases. If you are using these set commands, you should test
   and understand the behavior.
   
   _CHANGES IN SUBQUERY PROCESSING_
   
   Changes were made in subquery processing to support full ANSI
   compatibility and to fix some bugs in the SQL Server. These changes may
   produce different results as well as performance differences, as
   detailed below.
   
   You should test subqueries used in your applications to understand the
   new behavior.
   
   Changes to Subqueries Using IN/ANY
   
   In pre-System 10 SQL Servers, subqueries using in or any would return
   duplicates if the values being selected contained duplicates.
   
   For example, using the pubs database:

        select pub_name
        from publishers
        where pub_id in
                (select pub_id
                from titles)

   
   
   In pre-System 10 SQL Servers this query returned:

        pub_name
        -------------------
        New Age Books
        New Age Books
        New Age Books
        New Age Books
        New Age Books
        Binnet & Hardley
        Binnet & Hardley
        Binnet & Hardley
        Binnet & Hardley
        Binnet & Hardley
        Binnet & Hardley
        Binnet & Hardley
        Algodata Infosystems
        Algodata Infosystems
        Algodata Infosystems
        Algodata Infosystems
        Algodata Infosystems
        Algodata Infosystems

   
   
   As of System 10, this query returns:

        pub_name
        -------------------
        New Age Books
        Binnet & Hardley
        Algodata Infosystems

   
   
   Change in Evaluation of not in
   
   ANSI states that if a subquery returns a NULL, a not in should
   evaluate to UNKNOWN or FALSE. Here is an example, using the pubs
   database:

        select pub_id
        from publishers
        where $100.00 not in
                (select price
                from titles
                where titles.pub_id=publishers.pub_id)

   
   
   In pre-System 10 SQL Servers, this query returns:

        pub_id
        --------
        0736
        0877
        1389

   
   
   In the System 10 SQL Server, this query returns:

        pub_id
        --------
        0736

   
   
   Change in Results of Subquery with or...exists/in/any
   
   The pre-System 10 SQL Server returned the wrong results when an
   exists, in, or any subquery appeared under an or.
   
   Given the following tables and contents:

full_table:     x       y       empty_table:    z
                ----    ----                    -----
                5       2

   
   
   with the following example query:

        select x
        from full_table
        where y in
                (select z
                from empty_table)
        or y = 2

   
   
   In pre-System 10 SQL Servers, this query returns:

        x
        -----
        no rows returned

   
   
   As of System 10, this query returns:

        x
        -----
        5

   
   
   Change in Evaluation of >ALL and <ALL
   
   ANSI states that >ALL and <ALL should be TRUE when a subquery returns
   no rows. Here is an example against the pubs database:

        select title
        from titles
        where advance > all
                (select advance
                from publishers, titles
                where titles.pub_id = publishers.pub_id
                and pub_name="No Such Publisher")

   
   
   In pre-System 10 SQL Servers, this query returns:

        title
        ---------------------------
        no rows returned

   
   
   As of System 10, this query returns all rows in the titles table:

        title
        ---------------------------
        But Is It User Friendly?
        Computer Phobic and Non-Phobic Individuals: Behavior Variations
        Cooking with Computers: Surreptitious Balance Sheets
        Emotional Security: A New Algorithm
        Fifty Years in Buckingham Palace Kitchens
        Is Anger the Enemy?
        Life Without Fear
        Net Etiquette
        Onions, Leeks, and Garlic: Cooking Secrets of the Mediterranean
        Prolonged Data Deprivation: Four Case Studies
        Secrets of Silicon Valley
        Silicon Valley Gastronomic Treats
        Straight Talk About Computers
        Sushi, Anyone?
        The Busy Executive's Database Guide
        The Gourmet Microwave
        The Psychology of Computer Cooking
        You Can Combat Computer Stress!

   
   
   Change in Evaluation of Aggregates with exists
   
   In pre-System 10 SQL Servers, queries that had both aggregates and
   exists subqueries sometimes returned the wrong answer. This happened
   for correlated subqueries where there were duplicates in the subquery.
   Here is an example from the pubs database:

        select count(*)
        from publishers
        where exists
                (select * from titles
                where titles.pub_id = publishers.pub_id)

   
   
   In pre-System 10 SQL Servers, this query returns 18; as of System 10,
   this query returns 3.
   
   Change in Evaluation of in Subqueries with select distinct
   
   Prior to System 10, correlated in subqueries using distinct would
   cause the outer query not to return any rows. Here is an example from
   the pubs database:

        select pub_name
        from publishers
        where pub_id in
                (select distinct pub_id
                from titles
                where titles.pub_id = publishers.pub_id)

   
   
   In pre-System 10 SQL Servers, this query returns:

        pub_name
        ---------------------------
        no rows returned

   
   
   In the System 10 SQL Server, this query returns:

        pub_name
        ---------------------------
        New Age Books
        Binnet & Hardley
        Algodata Infosystems

   
   
   Change In Evaluation of between
   
   The ANSI standard states that a between predicate of the form:
   
   expr1 between expr2 and expr3
   
   is equivalent to:
   
   expr1 >= expr2 and expr1 <= expr3
   
   The pre-System 10 SQL Server switched expr2 and expr3 automatically
   if it knew that expr2 > expr3 at compile time. As of System 10, SQL
   Server no longer performs that switch.
   
   For example:

        create table demo (id int)
        insert into demo values (250)
        select id from demo where id between 400 and 200

   
   
   In pre-System 10 SQL Servers, this query returns:

        id
        ---------------------------
        250

   
   
   As of System 10, this query returns:

        id
        ---------------------------
        no rows returned
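
   The change amounts to dropping the old operand swap. Both evaluation
   strategies can be modeled directly (Python, illustrative only):

```python
def between_ansi(value, low, high):
    # System 10 / ANSI: literal expansion, no operand swap; the range is
    # empty when low > high.
    return low <= value <= high

def between_legacy(value, a, b):
    # Pre-System 10: operands swapped when the server saw a > b at
    # compile time.
    low, high = min(a, b), max(a, b)
    return low <= value <= high

# id = 250, predicate "id between 400 and 200"
assert between_ansi(250, 400, 200) is False   # no rows returned
assert between_legacy(250, 400, 200) is True  # row 250 returned
```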

   
   
   _CHANGE IN COMMENT PROTOCOL_
   
   ANSI comments are started with two or more consecutive hyphens (--)
   and are terminated by a <newline>.
   
   ANSI comments co-exist with the Transact-SQL comments as of the
   System 10 SQL Server.
   
   Certain mathematical expressions will return different results in the
   System 10 SQL Server because of the support of ANSI comments.
   
   For example:
   
   select 5--2
   
   In pre-System 10 Servers, this query returns 7.
   
   As of System 10, this query returns 5 because --2 is considered a
   comment.
   
   You can use the following query to get back the value 7 under the
   System 10 SQL Server:
   
   select 5-(-2)
   
   Alternately, you can construct the query with extra spaces:
   
   select 5 - -2
   
   _CHANGE IN PERMITTED SYNTAX_
   
   In pre-System 10 SQL Servers the following syntax was allowed:

        select * from table1, table1
        where clause

   
   
   ANSI states this syntax is invalid; therefore this query will return a
   syntax error in System 10. The correct syntax in System 10 is:

        select * from table1 t1, table1 t2
        where clause

   
   
   The following update statement also returns a syntax error in System
   10:

        update table1
        set a = a + 1
        from table1
        where b = 5

   
   
   The correct syntax is:

        update table1
        set a = a + 1
        from table1 t1
        where t1.b=5

   
   
   _NO MORE NULL COLUMN HEADINGS_
   
   Previous SQL Server releases allowed NULL column headings in tables
   created using select into. As of System 10, you must provide a column
   heading that is a valid SQL identifier. Check all applications that
   use select into.
   
   Examples of select list items that require column headings are:
     * An aggregate function, such as avg(advance)
     * An arithmetic expression, such as colname * 2
     * String concatenation, such as au_lname + ", " + au_fname
     * A built-in function, such as substring(au_lname,1,5)
     * A constant, such as "Result"
       
   
   
   There are three ways to specify column headings:

        select title_id, avg_advance = avg(advance)
                into #tempdata
                from titles

        select title_id, avg(advance) avg_advance
                into #tempdata
                from titles

        select title_id, avg(advance) as avg_advance
                into #tempdata
                from titles

   
   
   _MORE CONSISTENCY REQUIRED FOR CORRELATION NAMES_
   
   In pre-System 10 releases, statements that specified correlation names
   but did not use them consistently still returned results. The
   following statement now returns errors in accordance with ANSI:

        select title_id
                from titles t
                where titles.type = "trad_cook"

   
   
   The correct query is:

        select title_id
                from titles t
                where t.type = "trad_cook"

   
   
   When a subquery contains such an inconsistent correlation name, no
   error is reported, but queries may return different results than in a
   pre-System 10 Server:

        select *
        from mytable
        where columnA =
                (select min(columnB) from mytable m
                where mytable.columnC = 10)

   
   
   In pre-System 10 releases, mytable.columnC in the above subquery would
   have referred to the mytable in the subquery. In System 10,
   mytable.columnC refers to the outer table mytable.
   
   If the query needs to refer to mytable in the subquery, construct it
   as follows:

        select *
        from mytable
        where columnA =
                (select min(columnB) from mytable m
                where m.columnC = 10)

   
   
   _LAST MINUTE CHECKLIST_
   
   The information in this section applies to upgrades from 10.x and
   earlier releases of SQL Server. Perform the tasks listed below to
   avoid known causes of failure.
   
   Read the Documentation
   
   Read the following documents for important information about your
   upgrade:
     * What's New in Sybase SQL Server Release 11.0?
     * Release Bulletin
     * SQL Server installation and configuration guide
       
   
   
   If you are upgrading from a pre-10.x SQL Server, the installation and
   upgrade utility, sybinit, will not be familiar to you. It is explained
   in the SQL Server installation and configuration guide.
   
   Backup All Databases
   
   Back up all databases in case of upgrade failure. A failed upgrade may
   corrupt your databases, so complete backups are essential.
   
   Mirror and Unmirror
   
   In addition to your backups, we recommend mirroring SQL Server if your
   environment permits. Restoring your databases from the mirror is much
   quicker than restoring from backups. Use the following syntax:

        disk mirror
        name = "device_name", mirror = "physical_name"

   
   
   Then, unmirror all Sybase mirrors before upgrading. Use the following
   syntax:

        disk unmirror
        name = "device_name", mode = remove

   
   
   See the SQL Server System Administration Guide for instructions on
   mirroring. See the SQL Server Reference Manual for information about
   the disk mirror, unmirror, and remirror commands.
   
   Increase SQL Server Memory
   
   Various changes in SQL Server 11.0 have increased its memory
   requirements. These changes include:
     * New user log cache
     * Larger dataserver binary
     * Previously existing structures which are now larger (such as
       locks)
       
   
   
   We recommend the following approach to adjusting your memory for
   upgrade:
    1. Before making any upgrade-related changes, look at the error log
       to get a profile of the memory usage on your production SQL
       Server. Look for messages that give the buffer cache, procedure
       cache, and procedure header sizes. Record these values to use
       after the upgrade.
    2. Next, to ensure that you have enough memory to successfully
       complete the upgrade to 11.0, use sp_configure to change the
       following configuration parameters to their default values:

      Configuration Parameter    Default Value
      user connections           25
      locks                      500
      open objects               5000
      memory                     at least 7680 (15 MB)

     Use commands like the following to change the parameters:

             sp_configure "user connections", 25
    3. After making these changes to maximize available memory, check the
       error log again and record how much cache you have now. You will
       use this information to configure your cache after the upgrade.
    4. If you are upgrading from a pre-10.x SQL Server and your user
       databases have a lot of stored procedures, triggers, views, or
       rules, you may need to increase the stack size because remapping
       stored procedures during upgrade requires extra stack space. Do
       the following:
          + Record your current stack size.
          + Increase stack size. For example, to increase stack size to
            200K, enter:
            
     sp_configure 'stack size', 204800
    5. After the upgrade, perform the tasks described in ``Important
       Steps After the Upgrade'' below to return your memory to its
       normal configuration.
       
   
   
   See "How SQL Server Uses Memory" in the SQL Server Troubleshooting
   Guide for a discussion of tools for assessing memory usage. See the
   System Administration Guide and the Performance and Tuning Guide for
   more information about SQL Server memory usage.
   
   _ADDITIONAL PRE-UPGRADE TASKS_
   
   Perform the following additional tasks before beginning your upgrade:
     * If you are upgrading from a pre-10.x SQL Server, check that you
       have extra space in all your databases for stored procedures and
       other objects. Compiled objects, such as stored procedures,
       triggers and rules, will be remapped and will take up more disk
       space than previously.
     * If you are upgrading from release 10.x, disable replication. If
       replication is enabled and your log has not been cleared, the
       upgrade will fail. The Release Bulletin details the steps
       necessary to disable replication. See also the Replication Server
       Commands Reference.
     * Run the following dbccs on all databases: checkdb, checkalloc, and
       checkcatalog. See the SQL Server Reference Manual for instructions
       on running dbccs.
     * Truncate transaction logs on all databases before running sybinit.
       This is a precaution against failure.
     * Verify that you have enough space for system tables. If you are
       upgrading from SQL Server 10.x, slightly more disk space is
       required for the creation of new system tables. Use the "Test
       upgrade eligibility" option in sybinit to check that you have
       enough disk space. Follow the instructions in the SQL Server
       installation and configuration guide.
     * Turn off all options on all your databases, with the exception of
       tempdb, using sp_dboption. The upgrade requires that you set
       select into/bulk copy to true for tempdb. To turn off an option,
       use this syntax:
       
   
   
   sp_dboption database_name, "option", false
   
   To set select into/bulk copy for tempdb, use the following command:
   
   sp_dboption tempdb, "select into/bulkcopy", true
   
   See the SQL Server Reference Manual for information on sp_dboption.
     * If you are upgrading from a pre-10.x SQL Server, check that you
       have enough devices configured to create the sybsystemprocs
       device. For example, if you have 10 devices configured, and all of
       them are already in use, increase devices by 1 as follows:
       
   
   
   sp_configure devices, 11
   
   See the SQL Server Reference Manual for information on sp_configure.
     * Be sure SQL Server is running and all users are off before
       beginning the upgrade.
       
   
   
   _IMPORTANT STEPS AFTER THE UPGRADE_
   
   After performing the upgrade according to instructions in the SQL
   Server Installation and Configuration Guide, be sure to perform all
   the tasks in the following sections.
   
   Reconfigure Memory for Normal Usage
   
   Follow these steps to return your memory parameters to their normal
   configurations. Use the record of memory usage you created in the
   section "Increase SQL Server Memory".
    1. Check the error log to see how much memory is left in cache. By
       subtracting this number from the amount you recorded in Step 3 of
       ``Increase SQL Server Memory'', you can determine how much
       additional memory is required by the 11.x SQL Server.
    2. Using the baseline information you obtained in Step 1 of
       ``Increase SQL Server Memory'', reconfigure the memory parameter
       as needed to support your usual number of user connections, open
       objects, and locks.
    3. Use the sp_configure command to reconfigure the resources you
       changed in Step 2 back to the usual values needed for your
       production environment.
    4. Reset your stack size to the size you recorded in Step 4 of
       ``Increase SQL Server Memory''.
       
   
   
   Perform Backup Tasks
   
   Perform the following tasks to back up your databases:
     * Install Backup Server. If you upgraded from 10.x and you want to
       keep the same port number for your 11.0 Backup Server, shut down
       the 10.x Backup Server before you start the 11.0 Backup Server.
       See the SQL Server installation and configuration guide for
       complete instructions on installing Backup Server.
     * After you install Backup Server, verify that SQL Server's default
       Backup Server name matches the name you gave Backup Server when
       you installed it. If you performed an upgrade from a pre-10.0 SQL
       Server, you must be especially careful that SQL Server knows the
       name you gave Backup Server. SQL Server needs the name in order to
       perform dumps and loads. The upgrade program automatically sets
       SQL Server's default Backup Server to SYB_BACKUP. However, you may
       have given your Backup Server a different name when you installed
       it. If so, use sybinit to reconfigure SQL Server and enter the
       correct Backup Server name as the default. Follow the instructions
       in the SQL Server Installation and Configuration Guide for
       installing a Backup Server and configuring an existing SQL Server.
     * Back up all databases immediately so that your dumps are at the
       right release level. If necessary, you can load 10.x dumps to a
       release 11.0 SQL Server, but you cannot load 4.9.x or 4.2 dumps.
       
   
     _________________________________________________________________
   
   
   
   Disclaimer: No express or implied warranty is made by Sybase or its
   subsidiaries with regard to any recommendations or information
   presented in SYBASE Technical News. Sybase and its subsidiaries hereby
   disclaim any and all such warranties, including without limitation any
   implied warranty of merchantability or fitness for a particular
   purpose. In no event will Sybase or its subsidiaries be liable for
   damages of any kind resulting from use of any recommendations or
   information provided herein, including without limitation loss of
   profits, loss or inaccuracy of data, or indirect special incidental or
   consequential damages. Each user assumes the entire risk of acting on
   or utilizing any item herein including the entire cost of all
   necessary remedies.
   
   _STAFF_
   
   Principal Editor: Leigh Ann Hussey
   
   Contributing Writers:
   Kathy Saunders, Cris Gutierrez
   
   Send comments and suggestions to:
   
   SYBASE Technical News
   6475 Christie Avenue
   Emeryville, CA 94608
   
   or send mail to technews@sybase.com
   
   Copyright 1996  Sybase, Inc. All Rights Reserved.
                                   Q10.3.3

           Sybase Technical News Volume 5 Number 2, February 1996

This issue of Sybase Technical News contains new information about your
Sybase software. This newsletter is intended for Sybase customers with
support contracts. You may distribute it within a supported site; however,
it contains proprietary information and may not be distributed publicly. All
issues of Sybase Technical News and the troubleshooting guides are included
on the AnswerBase CD, SupportPlus Online Services Web pages, and the Sybase
PrivateLine forum of CompuServe. Send comments to technews@sybase.com.

To receive this document by regular email, send name, full internet address
and customer ID to technews@sybase.com.

In this Issue

Tech Support News/Features

   * 1996 Technical Support North American Holiday Schedule
   * Change in Publication of Certification/Rollup Information

SQL Server 11.0

   * SQL Server 11.0 Network Affinity Feature
   * Identity Enhancement
   * Creating and Configuring I/O Buffer Pools with sp_poolconfig
   * Large I/O for Log and User Log Cache
   * How to Use syslogshold
   * New Backup Server Auto Tape Configuration Feature
   * Dump to Remote Backup Server
   * Tape Stacker Support

SQL Server General

   * Problems Installing on OpenVMS 6.2
   * Problems Parsing Column Name
   * Indexes and Dirty Reads

Connectivity / Tools / PC

   * Cursor Integrity
   * Replication Server NLM Compatibility

1996 Technical Support North American Holiday Schedule

Sybase Technical Support is open on all holidays and provides full service
on many. During the limited-service holidays shown below, Technical Support
will provide the following coverage:

   * SupportPlus Preferred and Advantage customers may log all cases; we
     will work on priority 1 and 2 cases over the holiday.
   * 24x7 and 24x5 Support customers may log priority 1 cases; we will work
     on these over the holiday.
   * SupportPlus Standard, Desk Top, and Regular Support customers may
     purchase Extended-hour Technical Support for coverage over the holiday.
         Sybase Technical Support
        limited-service holidays 
             U.S. customers

          Holiday          Date
      Memorial Day     May 27
      Independence Day July 4
      Labor Day        September 2
      Thanksgiving     November 28
      Christmas        December 25

      Sybase Technical Support
 limited-service holidays  Canadian
             customers

        Holiday            Date
 Canada Day            July 1
 Labour Day            September 2
 Canadian Thanksgiving October 14
 Christmas Day         December 25
 Boxing Day            December 26

If you have questions, please contact Technical Support.

Change in Publication of Certification/Rollup Information

As of this issue, Sybase Technical News will no longer be printing
Certification or Rollup information, in the interest of providing customers
with more up-to-date information than is possible given its quarterly
release schedule. Instead, you can get the information on the CompuServe
Sybase PrivateLine forum and through the SupportPlus Online Services web
pages.

To reach the SupportPlus Online Services web pages, follow these steps:

  1. Go to the following URL using Netscape Navigator or any other browser
     that supports Secure Sockets Layer (SSL):

     https://www-es1.sybase.com/plus/

     ------------------------------------------------------------------
     Note
     If you are behind a firewall, your proxy server must also support
     SSL.
     ------------------------------------------------------------------

  2. When prompted for your user ID and password, enter them.

  3. If you don't have an ID and password for SupportPlus Online Services,
     go to the following URL and follow directions there to register
     yourself for SupportPlus Online Services:

     https://www-es1.sybase.com/registergw.html

     You must supply your customer number and email address.

SQL Server 11.0 Network Affinity Feature

SQL Server release 11.0 is informally known as "the performance release"
because many new features have been added to increase performance.

One such new feature is the support for multiple network engines (MNE),
which means that SQL Server processes (referred to as engines) share the
network I/O load, with the engine that has the fewest connections taking on
the job of network I/O regardless of other tasks that engine may be
performing.

It has been possible, since release 4.8 of SQL Server, to configure for
multiple engines; as of 11.0, configuring for multiple engines buys you
more in performance, so it may be to your advantage to configure more
engines than you have in the past.

Understanding Network Affinity

Network affinity is the term used to describe how a task is assigned to the
engine that performs its network I/O. For example, say Task1 (an isql
connection or application) logs into SQL Server through Engine 0, but Engine
2 has the fewest actual connections at that moment; Engine 0 assigns Task1
to Engine 2 for its network affinity. Task1 can use any other engine for
subsequent disk I/O, but when it needs to do network I/O, Task1 must do it
from Engine 2, the one for which it has network affinity.
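
The assignment rule described above, giving the new task's network I/O to
the engine with the fewest current connections, can be sketched as follows
(engine names and connection counts are hypothetical):

```python
def assign_network_engine(connection_counts):
    """Return the engine that should own a new task's network I/O:
    the one with the fewest current connections."""
    return min(connection_counts, key=connection_counts.get)

# Hypothetical snapshot: Engine 2 has the fewest connections, so it gets
# network affinity for the next incoming task.
engines = {"engine0": 12, "engine1": 7, "engine2": 3}
assert assign_network_engine(engines) == "engine2"
```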

The process of moving network I/O from one engine to another is called
"network affinity migration," and is supported on these SMP systems:

   * Digital UNIX
   * RS/6000 AIX
   * Sun Solaris
   * HP-UX

Other platforms, such as Windows NT, also support handling network I/O
through multiple engines, but the actual method is different.

Configuring Multiple Engines

MNE is automatic if you have configured your SQL Server for multiple
engines. Having MNE can improve performance, but you must determine whether
or not to implement multiple engines based on an analysis of your own
system's performance.

Before you add engines, monitor the system to determine baseline
performance; then add engines and measure subsequent performance to
determine the impact of MNE on system performance. You can use SQL Monitor
or sp_sysmon to measure server performance.

Monitor these areas and take them into account when you configure engines:

   * CPU usage: if CPU usage on all engines is over 85 percent, adding an
     engine can be beneficial.
   * User connections: if the SMP system supports network affinity
     migration, each engine will handle the network I/O for its
     connections, which allows the potential for more user connections.
   * Memory: adding engines uses memory; if Error 701 appears frequently
     in the error log or at the client terminal, you may wish to increase
     the amount of available memory.

The number of online engines is configurable with sp_configure. The syntax
is:

sp_configure "max online engines", value

You must reboot SQL Server after any changes to the max online engines
value, as that value is not dynamically configurable.

Considerations in Configuring Engines

Consider these guidelines when you configure for multiple engines:

   * Start with a few engines and add additional ones when the active CPUs
     are over 85 percent utilized
   * Add engines if the current performance is not adequate for an
     application and there are enough CPUs on the machine
   * Decrease engines if a hardware failure disables CPUs on the machine
   * Configure only as many engines as you have usable CPUs; configuring for
     more engines may slow performance
   * If your machine also hosts non-SQL Server processes, or there is a lot
     of processing by a client, then one engine per CPU may be excessive.

Multiple Network Engines and User Connections

In releases earlier than 11.0, user connections were limited by the
operating system, file descriptors (in the case of UNIX; other platforms
differ), and the SQL Server global variable @@max_connections. With the
introduction of MNE in release 11.0, each additional engine supports its
own set of user connections, so SQL Server allows you to increase the
number of user connections based on the number of engines in your system.

@@max_connections now represents the maximum number of file descriptors
allowed by the operating system for your process, minus a few file
descriptors used by SQL Server itself. For example, if SQL Server is
configured for one engine and the value of @@max_connections is 1019, adding
a second engine increases that value to 2039, assuming that there is only
one master network listener.
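You can inspect this ceiling directly before raising the connection count;
for example:

```sql
-- Report the per-process file descriptor ceiling SQL Server computed,
-- which bounds "number of user connections".
select @@max_connections
```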

Given the increase in @@max_connections, you can now configure for more user
connections using sp_configure:

sp_configure "number of user connections", value

     ------------------------------------------------------------------
     Note
     Conversely, if you decrease the number of engines on your system,
     you may also have to decrease the number of user connections
     accordingly.
     ------------------------------------------------------------------

Remember that increasing or decreasing user connections is not dynamic; you
must reboot SQL Server after any such changes.

Identity Enhancement

The IDENTITY column feature allows you to create a column with
system-generated values that uniquely identify each row in a table. Release
11.0 provides the following enhancements to IDENTITY columns.

Identity Enhancement for "Dirty Reads"

For tables with no unique indexes, the identity in nonunique index database
option instructs SQL Server to add an IDENTITY column to any index created
in the database, in order to make it unique.

Unique indexes are required for isolation level 0 ("dirty") reads and
updatable cursors, so if you use these in your installation, this database
option may be useful to you. Implement it with the following commands:

sp_dboption database_name, "identity in nonunique index", true

use database_name

checkpoint

     ------------------------------------------------------------------
     Note
     You may have optimization problems after setting this database
     option. Because SQL Server treats all indexes as unique with this
     option set, the formulas for calculating the usefulness of a
     particular index may no longer be accurate, causing the optimizer
     to pick an index that may not be the fastest.
     ------------------------------------------------------------------

Identity Enhancement to Reduce Spinlock Contention

The identity grab size configuration parameter allows each SQL Server
process in a multiprocessor environment to reserve a specified number of
IDENTITY column values when you add rows to tables that contain IDENTITY
columns.

Setting this configuration parameter can increase performance on
multi-engine servers, because it reduces spinlock contention on the memory
structure that contains the next identity value. Use sp_configure to set it,
as follows:

sp_configure "identity grab size", n

where n is the number of reserved values.

For example, if you set the identity grab size to 20, when a user does an
insert to a table with an IDENTITY column, SQL Server reserves a block of 20
values for that user, and the next 20 rows that user inserts will have
sequential identity values. If a second user starts inserting rows while the
first user is still doing inserts, SQL Server reserves another block of 20
values for the second user.
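As a sketch of the behavior described above (the table name here is
hypothetical, and each statement would be sent as its own batch):

```sql
-- Hypothetical table with an IDENTITY column.
create table sales_log (id numeric(10,0) identity, amount money)

-- Reserve identity values in blocks of 20 per inserting process.
sp_configure "identity grab size", 20

-- Session A's first insert reserves identity values 1-20; a concurrent
-- session B reserves 21-40, so each session's rows are sequential
-- within its own block.
insert sales_log (amount) values ($10)
```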

     ------------------------------------------------------------------
     Note
     If you set the identity grab size to a large value, and a user
     logs out before using all the values SQL Server has reserved for
     that user, those values are not released and thus are lost,
     causing a large gap in the identity values. Additionally, if the
     value for sp_configure "identity burning set factor" is
     inappropriate, server failure can cause large gaps. Large gaps
     mean that your identity values may max out sooner. Take this into
     account when you set these configuration values; see "The IDENTITY
     Column" in Sybase Technical News, volume 4 number 1, for more
     information.
     ------------------------------------------------------------------

Enhancement to Auto Identity Database Option

In SQL Server 10.0 we introduced the auto identity database option, which
signalled SQL Server to add an IDENTITY column to any new table it created.
You can set auto identity for the session using the command set auto
identity on, or for all sessions in the database with this command:

sp_dboption database_name, "auto identity", true

use database_name

checkpoint

Setting auto identity "on" means that when a user creates a table in the
affected database with no primary key, unique constraint, or IDENTITY
column, SQL Server automatically generates an IDENTITY column for the table.

In release 10.0, the size of the IDENTITY column so generated was always 10
digits. As of release 11.0, you can use the size of auto identity
configuration parameter to set the precision of the IDENTITY columns that
SQL Server generates. The command syntax is:

sp_configure "size of auto identity", n

where n is the number of digits you want your automatically generated
IDENTITY columns to have. For example, if you want the automatic IDENTITY
columns to have 15 digits, the command is:

sp_configure "size of auto identity", 15

The default setting is still 10 digits.

More Information

For more information about the enhancements to IDENTITY columns, see
"IDENTITY Columns" in the SQL Server Reference Manual.

Creating and Configuring I/O Buffer Pools with sp_poolconfig

As of release 11.0, SQL Server includes a stored procedure, sp_poolconfig,
to help you take advantage of the new option to use large I/Os, by creating
and manipulating I/O buffer pools in cache.

Large I/Os have been implemented for two reasons:

   * Large I/Os minimize physical I/O by reducing the number of times SQL
     Server must go to disk to obtain a data page
   * Large I/Os give you better throughput by grabbing up to eight 2K pages
     in one operation rather than one page at a time.

Large I/Os are useful for decision support systems (DSS) applications such
as:

   * Running regular maintenance tasks such as bcp
   * Batch updates to large numbers of rows
   * Joins on large tables
   * Loading data into a table using insert statements
   * Scans of large tables and range scans of large tables

Large I/Os are also useful when your log disk is a bottleneck; for more
information on sizing the log I/O, see "Large I/Os for Log" on page 12.

About Buffer Pools

An I/O buffer is a memory structure that tracks a page or set of pages in
buffer cache.

One buffer pool can contain as many I/O buffers as its size permits; you
specify the I/O size for any given buffer pool. Valid I/O sizes are 2K, 4K,
8K, and 16K. A given buffer pool can only accommodate the one I/O size with
which it was configured, so we speak of a 2K buffer pool (which consists of
2K I/O buffers), a 4K buffer pool (which consists of 4K I/O buffers), and so
forth.

Every named cache contains at least one 2K buffer pool (which cannot be
deleted), and can contain at most four buffer pools (one of each: 2K, 4K, 8K
and 16K).
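As an illustration of the structure described above, you can display a
named cache and its pools with sp_cacheconfig (this assumes a cache named
pub_cache has already been created):

```sql
-- Display the configuration of a named cache, including the buffer
-- pools it contains; a new cache starts with only the 2K pool.
sp_cacheconfig pub_cache
```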

Using sp_poolconfig

sp_poolconfig is a dynamic command, meaning that when you invoke it, your
changes occur immediately without your having to reboot SQL Server. The
syntax for sp_poolconfig is as follows:

sp_poolconfig cache_name, "pool_size [P|K|M|G]",
"first_io_sizeK"[, "second_io_sizeK"]

Here is an explanation of each variable:

   * cache_name is the named cache for which you want to create the buffer
     pool. Note that if you don't explicitly create a buffer pool for a
     named cache, SQL Server automatically creates the default 2K buffer
     pool for it, and allocates all memory configured for that cache to the
     default buffer pool. For more information on named caches, see the SQL
     Server System Administration Guide and the SQL Server Performance and
     Tuning Guide.
   * pool_size is the size of the pool; you specify whether that size is in
     pages (P), kilobytes (K), megabytes (M), or gigabytes (G).
   * first_io_size is the I/O buffer size. Valid values are 2K, 4K, 8K and
     16K (on Stratus, 4K, 8K, 16K, and 32K).
   * second_io_size is only used if you are moving memory from one buffer
     pool to another. If the first buffer pool is smaller than the size you
     wish to make it, SQL Server takes memory from the second buffer pool
     and gives it to the first. If the first buffer pool is larger than the
     size you wish to make it, SQL Server takes memory from the first buffer
     pool and gives it to the second.

The following command creates a buffer pool for the named cache pub_cache,
which is 4MB in size and whose I/O buffers are 4K each:

sp_poolconfig pub_cache, "4M", "4K"

Here is an example command that resizes that same buffer pool to 2MB. If
the 4K pool is smaller than 2MB, SQL Server moves enough memory from the
16K buffer pool to bring the 4K pool up to 2MB; if the 4K pool is already
larger than 2MB, SQL Server moves the excess to the 16K pool:

sp_poolconfig pub_cache, "2M", "4K", "16K"

     ------------------------------------------------------------------
     Note
     If SQL Server detects that it cannot move memory from one buffer
     pool to another, it will return message 18145:
     Less memory moved than requested in cache '%1!'. Requested size =
     %2! Kb: from pool = %3!, to pool = %4!, actual memory moved = %5!
     Kb.
     ------------------------------------------------------------------

You can use sp_poolconfig to delete a buffer pool as well, by setting the
I/O size to 0. Here is an example command to delete the buffer pool we
modified above:

sp_poolconfig pub_cache, "0", "4K"

The memory removed is reassigned to the default (2K) pool. You can also
delete a buffer pool by deleting its entry in the configuration file.

Why Modify Buffer Pools?

There are two reasons why you might need to modify a buffer pool:

   * The pool is too small. This can cause queries to wait or to use
     different buffer pool sizes, or may result in more physical I/O due to
     pages being flushed from the cache
   * The pool is too large, using up memory that could be better applied
     elsewhere

You can determine if either of these is true by monitoring your system's
performance with sp_sysmon (see page 16 for details).

Tuning Buffer Pools for Type of Processing

If you have an application that requires online transaction processing
(OLTP) during the day and DSS processing during the evening, you must
configure your buffer pools accordingly. Sybase recommends a 16K buffer pool
for the DSS jobs, and a 2K buffer pool for the OLTP jobs.

Create two separate configuration files with the correct buffer pool
configuration and load them into SQL Server as necessary (note that you
cannot delete the 2K buffer pool, but you can allot it less memory than the
16K pool for the DSS configuration).
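If you prefer to resize pools on the fly rather than swapping configuration
files, the day/night shift can be sketched with sp_poolconfig (the cache
name and sizes here are illustrative):

```sql
-- Evening: grow the 16K pool in pub_cache to 8MB for the DSS jobs
-- (the memory comes from the default 2K pool).
sp_poolconfig pub_cache, "8M", "16K"

-- Morning: shrink the 16K pool back to 1MB for OLTP; the freed
-- memory returns to the 2K pool.
sp_poolconfig pub_cache, "1M", "16K"
```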

For more information on using the configuration file, see the System
Administration Guide for release 11.0.

Restrictions

There are a few restrictions on creating, modifying and deleting buffer
pools:

   * The sum of the sizes of all buffer pools in a cache must be less than
     the total memory size of the cache.
   * You cannot delete the 2K buffer pool in any cache.
   * Pool sizes must be in increments of 2K, 4K, 8K or 16K.
   * Unless you specify otherwise, new buffer pools use memory from the
     default source 2K pool. To create a new buffer pool using memory from a
     pool other than the 2K pool, specify that pool in the second_io_sizeK
     position, as in the following example:

     sp_poolconfig pub_cache, "4M", "4K", "8K"

     In the example, SQL Server creates the 4K pool by taking 4MB of memory
     from the 8K pool.

   * The minimum pool size is 512K.
   * The maximum I/O buffer size is 16K.

More Information

For more information on using sp_poolconfig, see the SQL Server Reference
Manual and the SQL Server Performance and Tuning Guide.

Large I/O for Log and User Log Cache

To relieve the transaction log I/O bottleneck and improve transaction
throughput, SQL Server 11.0 now allows a configurable log I/O buffer size,
ranging from 2K to 16K. If there is a 4K buffer pool, SQL Server will use it
by default for the log; if you have not configured a 4K buffer pool,
however, SQL Server will use a 2K pool.

There are two ways in which SQL Server recognizes the buffer pool size:

   * At boot time, SQL Server reads the log I/O buffer size value from the
     sysattributes table. If a buffer pool at the specific log I/O size is
     not available in the named cache to which the log is bound, SQL Server
     displays error 18128 and does not change the log size:

     Unable to change the log I/O size. The memory pool
     for the specified log I/O size does not exist.

     When this error occurs, SQL Server takes 2K to be the default.

   * At run time, you can set or alter the log I/O buffer size with the
     system procedure sp_logiosize, which updates sysattributes. The change
     is dynamic, taking effect without your having to reboot SQL Server.

How to Configure Log I/O Buffer Size

The syntax for sp_logiosize is as follows:

sp_logiosize {"default" | "value"}

If you execute sp_logiosize "default", the log I/O size will be set to the
4K default. Otherwise, you can specify a value. The following command, for
example, sets the log I/O size to 8K:

sp_logiosize "8"

     ------------------------------------------------------------------
     Note
     Log I/O size is set at the database level. You must run
     sp_logiosize in the database for which you wish to set the log I/O
     size.
     ------------------------------------------------------------------

You can run sp_logiosize without parameters to see the log I/O size of the
database in which you run it, as follows:

sp_logiosize

The transaction log for database 'master' will use
I/O size of 2 Kbytes.

(return status = 0)

Use sp_poolconfig, as shown in ``Creating and Configuring I/O Buffer Pools
with sp_poolconfig'' on page 9, to create a buffer pool for the cache to
which the log is bound.
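Putting the two steps together, here is a sketch for a log bound to a
hypothetical named cache called log_cache:

```sql
-- Create a 4K buffer pool in the cache to which the log is bound...
sp_poolconfig log_cache, "1M", "4K"

-- ...then, in the target database, set the log I/O size to 4K.
sp_logiosize "4"
```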

Considerations in Configuring Log I/O Buffer Size

You should take these factors into consideration when setting your log I/O
size:

   * The processing type of your application
   * If your application is DSS, a large log I/O size is more useful
   * If your application is OLTP, a smaller log I/O size is preferable
   * I/O characteristics of your disk device, such as track size, access
     time, and disk caching
   * The power, speed and number of processes of your CPU

     ------------------------------------------------------------------
     Note
     Setting the log I/O value too high can affect SQL Server
     performance. SQL Server does not wait for pages to fill up before
     flushing log pages to disk; when the log page finally does fill,
     it will have to be written again. Writing the same page multiple
     times slows the server down.
     ------------------------------------------------------------------

We encourage you to experiment with different log I/O sizes and appropriate
buffer pools in a development environment before you set the value in a
production environment.

User Log Cache

In releases prior to 11.0, there could be significant contention as each
database process waited to acquire a spinlock to protect the log pages in
memory and then waited to write to the last page of the log. As of release
11.0, SQL Server has a user log cache (ULC) layer to accumulate log records.
It flushes those records to log pages under the following circumstances:

   * At the end of a transaction (commit / rollback)
   * On a write to a different database
   * When the ULC is running out of space
   * When a checkpoint occurs

On a single-processor system, logging with the ULC is at least as fast as it
was in release 10.x; and on multiprocessor systems you'll see significant
reductions in contention.

Configuring User Log Cache

User log cache size is a parameter configurable with sp_configure. The
syntax is:

sp_configure "user log cache size", value_in_bytes

The value you configure is the same for every user's log cache, and ranges
from 2048 (2K) to 2,147,483,647. It must be at least 2048, but does not have
to be a multiple of 2K.
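For example, to give each connection a 4K user log cache:

```sql
-- Doubles the default per-connection log cache from 2K to 4K.
-- Remember that this memory comes out of the default data cache
-- for every configured user connection.
sp_configure "user log cache size", 4096
```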

     ------------------------------------------------------------------
     Note
     For every user connection you have configured, SQL Server takes at
     least the 2K minimum from the available memory in the default data
     cache, and more if you have specified more. Configure your total
     memory accordingly, to ensure that there is enough memory to
     accommodate this.
     ------------------------------------------------------------------

Here are some guidelines for setting ULC size:

   * Do not set ULC size larger than your largest transaction, because it
     uses memory that might be better used elsewhere
   * Do not configure for long-running transactions
   * Do not set the size too small, or SQL Server will flush the ULC to the
     log more frequently, increasing spinlock contention on the log pages

ULC Spinlock Ratio

Another configurable parameter, user log cache spinlock ratio, specifies the
number of user log caches per ULC spinlock. Its syntax is:

sp_configure "user log cache spinlock ratio", value

We strongly recommend you leave it at the default, which is 20.

     ------------------------------------------------------------------
     Note
     If max online engines is set to one, the value for user log cache
     spinlock ratio is ignored and there is only one spinlock.
     ------------------------------------------------------------------

More Information

For more information on the user log cache, refer to the SQL Server System
Administration Guide. For more information on tuning the ULC size, and on
how to use sp_sysmon to monitor ULC usage and log I/O, refer to the SQL
Server Performance and Tuning Guide.

How to Use syslogshold

As of SQL Server release 11.0, you can use the new syslogshold table to help
you identify what process has the oldest active transaction and how long
that transaction has been open. An active transaction is one which has
written at least one record to the database's transaction log and has not
yet been committed.

Here is an example of output from a query to syslogshold:

dbid   reserved    spid   page        xactid         masterxactid
       starttime
       name
------ ----------- ------ ----------- -------------- ------------
       --------------------------
       ---------------------------------------------------------
    5           0      1        411 0x0000019b000d 0x000000000000
       Feb 22 1996  6:39PM
       $user_transaction

Any given database will have zero, one or two rows in syslogshold:

   * zero rows: no active transaction or replication
   * one row: an active transaction or replication
   * two rows: both an active transaction and replication
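To see every such row at once, you can query the table directly:

```sql
-- List each oldest active transaction and replication truncation
-- point currently tracked, across all databases.
select dbid, spid, starttime, name
from master..syslogshold
```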

The columns in syslogshold are as follows:

   * dbid - the ID of the database where the oldest transaction is open
   * reserved - a column that is not actually used in this version
   * spid - the ID of the process holding the open transaction
   * page - the starting page of the active portion of syslogs
   * xactid - the ID of the open transaction
   * starttime - the time at which the transaction was opened
   * name - the transaction name

If you are using Secure Server, there is an additional column in syslogshold
for sensitivity.

     ------------------------------------------------------------------
     Note
     syslogshold is dynamically created when it is queried, so its
     output is only accurate as of the instant it is collected.
     ------------------------------------------------------------------

Using syslogshold to Identify Process Blocking Log Truncation

To find the spid and transaction name of a process blocking truncation of
the log, execute the following query in each of your databases:

select H.spid, H.name
from master..syslogshold H, sysindexes I
where H.dbid = db_id() and I.id = 8
and H.page = I.first and H.spid != 0

This query looks for a row for the current database in which the starting
page of the oldest active transaction (H.page) is equal to the current
first page of syslogs (I.first for object ID 8, the syslogs table).

The use of H.spid != 0 indicates to SQL Server that the row is for the
oldest active transaction, rather than for a Replication Server truncation
point.

Output from this query will look something like this example:

spid   name
------ --------------------------------------------
     7 $user_transaction

If a row in syslogshold meets the query constraints, the oldest active
transaction begins on the oldest page of the log and is therefore limiting
the ability of SQL Server to truncate the log.

It may be appropriate to kill the offending process, if either of the
following is true:

   * You believe that you have only short transactions
   * Your transaction log is very large

To kill the process in the above example, you would execute the command:

kill 7

     ------------------------------------------------------------------
     Note
     You cannot select the spid into a variable (select @spid = H.spid)
     and then execute kill @spid; this syntax is not allowed.
     ------------------------------------------------------------------
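In practice, you select the spid first and then issue the kill command with
the literal value:

```sql
-- Find the spid of the oldest active transaction in this database...
select spid from master..syslogshold
where dbid = db_id() and spid != 0

-- ...then type the kill command by hand with the literal number:
kill 7
```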

Using syslogshold to Identify User and Application

To find the application that owns the oldest active transaction in a given
database, use the following command:

select P.hostname, P.hostprocess, P.program_name,
       H.name, H.starttime
from master..sysprocesses P, master..syslogshold H
where P.spid = H.spid and H.dbid = db_id()
and H.spid != 0

Here is an example of output from the above command:

hostname   hostprocess program_name     name
       starttime
---------- ----------- ---------------- -------------------------------------------
       --------------------------
fnord      12081       isql             $user_transaction
       Feb 22 1996  6:53PM

New Backup Server Auto Tape Configuration Feature

Backup Server release 11.0 includes a new feature which allows for automatic
configuration of any unknown tape device.

Historically, SQL Server (and Backup Server when it was first introduced)
was very device-dependent; for any given platform, a 4mm drive from one
vendor would not work in the same way as a 4mm drive made by a different
vendor. Backup Server supported a limited number of tape drives for which it
knew the characteristics.

As of release 11.0, however, when you execute a dump command to a device for
which Backup Server doesn't already know the characteristics, Backup Server
searches the tape device configuration file. If it finds no entry there for
that device, one of two things happens next:

   * If you have not specified dump ... with init, Backup Server returns the
     following message and aborts the dump:

     Device not found in configuration file.  INIT
     needs to be specified to configure the device.

     In this case, you should re-execute the dump command with the init
     qualifier:

     dump [database | transaction] to device_name with init

   * If you have specified dump ... with init, Backup Server returns this
     informational message:

     Device %s has not been configured by the Backup
     Server; configuration may take additional time.

When you execute dump ... with init to an unknown tape device, Backup Server
runs some tests against the device to determine its characteristics, stores
the information in the tape device configuration file for future reference,
and proceeds with the dump.

Tape Device Configuration File

The tape device configuration file is a text file, $SYBASE/backup_tape.cfg
by default. It is created the first time Backup Server writes an entry to
it. Entries in this file are of the form:

hostname dev_name filemarks append_strategy
blocksize ostype fm_type cls_rdtpmk

Here is what the fields mean:

   * hostname - the name of the machine where the device is located
   * dev_name - the name of the tape device, for example /dev/nrst1
   * filemarks - the number of filemarks written before the dump file is
     closed
   * append_strategy - an integer representing the strategy used to append
     a new dump to the volume set, as follows:
        0 - device supports only one dump file per tape
        1 - back skip file strategy. Used when the tape device supports
            over-writing tape marks; this is the most efficient way to
            append. The tape is positioned to append between the last two
            filemarks on the volume set.
        2 - seek to end strategy. The volume is first read to determine
            that it is the last volume in the volume set, then the device
            is rewound and the "seek to end of written data" system call
            is used for append positioning.
        3 - skip n filemarks strategy. Skips forward the exact number of
            filemarks currently written on the tape.
        4 - skip n+1 filemarks strategy. Skips forward one more than the
            number of filemarks currently on the tape. This strategy and
            the skip n filemarks strategy both use the "forward skip file"
            system call; the number of filemarks on the media is
            determined when the system reads one volume looking for the
            end of the volume set.
   * blocksize - the maximum block size of the device in bytes, ranging
     from 2K to 64K
   * ostype - a code used by the operating system to determine device type
   * fm_type - the type of filemarks for which the device is configured
   * cls_rdtpmk - an integer representing the read tape mark behavior, as
     follows:
        0 - BSD UNIX. BSD tape mark behavior is to position past the tape
            mark after reading it.
        1 - SVR4 UNIX. SVR4 does not reposition the tape, so a close must
            be issued in order to position past the tape mark.

Here is an example entry:

svribmseng2 /dev/rmt5.1 1 3 65536 7 1 0

This entry describes the device /dev/rmt5.1 on host svribmseng2, which
writes one filemark per dump, uses the skip n filemarks append strategy
(3), and has a 64K (65536-byte) maximum block size.

In normal circumstances, only Backup Server ever writes to this file.
However, if you happen to change devices, for example in the case of
hardware failure, and you install a tape device with different
characteristics but the same device name as the original, the next time you
try to dump, Backup Server will return this error and terminate the dump
command:

Device %1!: the operating system device type is
different than what is in the configuration file
%2!.  Please remove entry for this device in the
configuration file and reconfigure the device by
issuing a DUMP with the INIT qualifier.

In this case, you can use any text editor to delete the entry for that
device, and run the dump command with the init option, as if the device were
an unknown device. Backup Server will make a new entry for that device in
the configuration file.

Dump to Remote Backup Server

Since release 10.0, it has been possible to dump to a Backup Server running
on a remote machine. There are a number of advantages to this strategy,
especially for users of SQL Server on SunOS machines who want to upgrade to
Solaris, as well as a number of disadvantages. In addition, some
restrictions apply.

Configuring Devices for Remote Backup Servers

You can configure as many devices on remote Backup Servers as you need, up
to the stripe limit of 32 devices total. Here is the process for
configuring remote Backup Servers.

Set Up Local Machine

  1. Make sure the local SQL Server and Backup Server are running.

  2. Make sure the local interfaces file contains an entry for the remote
     machine's Backup Server. Only the "query" line is required. No aliases
     are allowed.

     ------------------------------------------------------------------
     Note
     You can check the name of the remote Backup Server by looking in
     its runserver file for the flag -S (UNIX) or /server (VMS).
     ------------------------------------------------------------------

  3. Execute sp_helpserver SYB_BACKUP to verify the proper network name for
     your local Backup Server. There doesn't need to be an entry in
     sysservers for the remote Backup Server, but it makes testing easier.

     ------------------------------------------------------------------
     WARNING!
     Do not modify SYB_BACKUP to point to your remote Backup Server's
     name.
     ------------------------------------------------------------------

Set Up Remote Machine

  1. Make sure the remote Backup Server is running.

  2. Make sure the remote Backup Server does not have the same network name
     as the one displayed in step 3 above.

     ------------------------------------------------------------------
     Note
     There doesn't need to be an entry in the remote interfaces file
     for any of the servers on the local machine.
     ------------------------------------------------------------------

Remote Dump Syntax

The syntax for dumping to a remote Backup Server is as follows:

dump [database | transaction] dbname to remote_dev
at remote_BSvr_name

where remote_dev is the physical name of the device configured at the remote
Backup Server, and remote_BSvr_name is the name of the remote Backup Server.

     ------------------------------------------------------------------
     Note
     You must always use the absolute pathname of the device in remote
     dumps; you cannot use a logical device name.
     ------------------------------------------------------------------

Likewise, you can load from the remote Backup Server using the following
syntax:

load [database | transaction] dbname from
remote_dev at remote_BSvr_name

Remote Use of sp_volchanged

To execute sp_volchanged at the remote Backup Server, add the remote_device
at remote_BSvr syntax to the command, for example:

sp_volchanged 11, '/dev/0mn' at remote_BSvr_name,
'PROCEED'

Advantages of Dumping to Remote Backup Server

Dumping to a device (disk or tape) on a remote machine can be useful in the
following circumstances:

   * The local tape drive fails
   * You need to store your dump on different media; for example, your local
     machine has a 4mm drive but you need the dump on an 8mm tape
   * You wish to dump to a disk file, but the only available space is on
     another machine
   * The local machine has no tape drive

Disadvantages of Dumping to Remote Backup Server

There are two primary disadvantages to remote dump and load:

   * It is much slower, due to the network I/O involved
   * Because it is implemented through DB-Library (which uses blocking I/O),
     it causes the local Backup Server to wait until network I/O is done,
     resulting in performance degradation

Bear this in mind when you consider setting up remote dump and load for your
site; you may wish to do remote dumps and loads when there is no or little
other activity on the local Backup Server.

SunOS (BSD) to Sun Solaris Dumps and Loads

Customers running on SunOS machines may use this feature to upgrade their
databases to Solaris (because SQL Server 11.0 is not available for SunOS).
There are two ways to do this:

   * Do a local dump on the SunOS side and a remote load on the Solaris side
   * Do a remote dump on the SunOS side and a local load on the Solaris side

In either case, you can then use online database and the databases will
upgrade automatically.

     ------------------------------------------------------------------
     WARNING!
     You cannot use this method to load master.
     ------------------------------------------------------------------

Restrictions on Cross-Platform Dumps/Loads

     ------------------------------------------------------------------
     WARNING!
     Be aware that cross-platform dumps/loads other than SunOS to
     Solaris have not been tested and Sybase does not support them;
     you try them entirely at your own risk.
     ------------------------------------------------------------------

SunOS to Solaris is one of the few possible cross-platform dump/load
actions; most others are not supported. You can do cross-platform dump/load
only if the platforms are compatible, as follows:

   * The byte ordering must be the same (least significant vs. most
     significant byte first)
   * The floating point format must be the same
   * The page size must be the same
   * Both platforms must compile any stored procedures in the same way

Additionally, all stripes in the dump must be either UNIX or VMS; there may
be no mixing.

It is possible to dump to a remote Backup Server on a different platform
than the one on which your database resides, and then to remote load from
that remote Backup Server to the original database or to another database on
the same platform from which you made the original dump.

     ------------------------------------------------------------------
     Note
     Sybase definitely does not support dumping to a tape, moving that
     tape physically to a different platform, and trying to load from
     that tape, because there is no guarantee that the device drivers
     are the same between platforms.
     ------------------------------------------------------------------

Tape Stacker Support

Question

Does Sybase support tape stackers, also called autoloading tape devices or
robot systems?

Answer

Currently, Sybase does not plan to support tape stackers directly. Tape
stackers require additional software to be fully functional, since there is
as yet no operating system that provides native stacker support.

We are currently working with several VARs that support tape stackers, and
integrations are planned. Sybase Technical News will keep you updated on
this front.

Problems Installing on OpenVMS 6.2

Question

I've been unable to install Sybase products from global media on my system
running OpenVMS 6.2. I get access violation errors when I run the VMSINSTAL
process. What's wrong?

Answer for 10.x and Later

Check the global part number of your media. If it is any of the following,
order a reshipment of the products by calling Sybase Customer Service at
1-800-8-SYBASE:

   * 61500-xx-0002-96 TID: 296
   * 61500-xx-0407-01 TID: 407
   * 61500-xx-0419-01 TID: 419

The above part numbers may have shipped up until 9/95. At that time the
latest TID (419) was re-released with an unloader program fixed to run on
OpenVMS 6.2.

When your reshipment arrives, check to be sure it contains new global media
with this part number:

   * 61500-xx-0419-02 TID: 419

     ------------------------------------------------------------------
     Note
     The new version of TID 419 has 02 as the last digits of its part
     number, rather than 01.
     ------------------------------------------------------------------

This should solve your problem.

Answer for 4.9.2

If you are experiencing access violations running VMSINSTAL, order a
reshipment of products from Sybase Customer Service at 1-800-8-SYBASE. When
your reshipment arrives, check to be sure it contains new global media with
this part number:

   * 61500-xx-0435-01 TID: 435

You should be able to unload from this new media without trouble.

Problems Parsing Column Name

Question

I have a query in SQL Server 10.x of the form:

select * from t1 where dbo.t1.c1 = ...

Why is it that if the dbo executes this command there is no problem, but if
other users execute this command, SQL Server returns this error:

Msg 107, Level 15, State 1:

Server `SYBASE', Line 1:

The column prefix `dbo.t1' does not match with a
table name or alias name used in the query.

Answer

The parser is complaining about the mismatch between "t1" and "dbo.t1". Try
reforming your query as follows:

select * from dbo.t1 where dbo.t1.c1 = ...

Tables are specified in the from clause of a query. References in other
clauses refer back to that specification to determine which table is meant.
The name by which a table is known comes from the from clause: the
specified name if given alone, or its alias if one is given. Every other
clause must use the name established in the from clause. Here the from
clause names the table as "t1", so the where clause must refer to "t1"; it
may not use "dbo.t1", because the latter refers to a table that does not
appear in the from clause.

Explanation

Tables owned by different owners can have the same name, so an unqualified
t1 in the from clause might resolve to fred.t1 when executed by user
"fred". In that case, dbo.t1 in the where clause names a table different
from the one in the from clause; going ahead with the query would return a
Cartesian product: one row from fred.t1 for every row in dbo.t1 whose c1
satisfies the search conditions, because there is no join between the two
tables.

The fact that dbo can run this query at all is an extension to ANSI SQL.

Indexes and Dirty Reads

Question

Given an employee table containing 100,000 rows with two indices:

   * a unique index on empnum
   * a non-unique index on lastname

when I execute the following query interactively, in isql,

select * from employees where lastname = "SMITH"

I get an immediate response. When I execute it in an application, the
response time is much slower. Why is this?

Answer

Check the isolation level of your session against that of your application:
use the command select @@isolation for the isql session, and look at the
query in the application to see whether it includes an isolation change. At
isolation level 0, also known as "read uncommitted" or "dirty read," the
optimizer will only consider unique indexes. You can give the command set
showplan on and then run your query to see which indexes are being used.

If you have set the isolation level to 0 for the transaction in your
application, the optimizer will not use the non-unique index on lastname,
resulting in a scan of the table via the unique empnum index.
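
The diagnostic commands just described can be bundled into a small script.
This is a sketch only: the table and query come from the example above, and
the SQL is printed here so it can be reviewed before being piped into isql
with your own server name and login.

```shell
#!/bin/sh
# Sketch: diagnostic SQL for comparing isolation level and query plan
# between two sessions.  Pipe the output into isql -Usa -S<server>.

diag_sql() {
    cat <<EOF
select @@isolation
go
set showplan on
go
select * from employees where lastname = "SMITH"
go
EOF
}

# Dry run: print the SQL for inspection.
diag_sql
```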

Cursor Integrity

Problem

I declared a cursor for select * from a table in one isql session, fetched a
few records, then inserted some more records from another isql session.
Under SQL Server version 10, the new records are inserted at the end of the
last page and the cursor can fetch those records.

But under SQL Server version 11, if the same table is partitioned, and after
a few fetches, if the cursor is at the last page and the insert happens on
previous pages, the fetch doesn't get those newly inserted records.

It seems as though partitioning is the problem, because the cursor finds the
newly-inserted rows meeting the search condition if the table isn't
partitioned.

Is this a bug?

Answer

This behavior isn't a bug. It is, however, a factor you must consider in
deciding whether or not to partition a table. In release 10, SQL Server
didn't use partitioned tables, so if you are counting on the old behavior,
you should not partition your table.

In a partitioned table, inserts are distributed; in that respect it acts as
though it had an index on some key that directs the insert point. Inserts
can go to the end of any of the page chains (partitions), so there are
multiple insertion points (though not as many as an indexed table actually
has). Here is a thumbnail sketch to give you an idea of what this looks
like:

[Image]

From a logical perspective, there is no guaranteed way to force the cursor
to see the new rows. From a physical perspective, there are ways in SQL
Server release 11 to force this behavior, but they are non-standard and
subject to change. These ways include:

   * Revert back to a non-partitioned heap.
   * Add an identity column, a nonclustered index on that column and force
     the cursor to use that index with forceindex. Note that this method
     will probably affect performance negatively.

Replication Server NLM Compatibility

Question

According to the release bulletin for Replication Server 10.1 on Netware,
the Replication Server NLM was built with Open Client/Open Server 10.0.2 and
therefore cannot be installed in the same directory or run at the same time
as earlier versions of Open Client/Open Server or SQL Server 10.0.1 or
earlier.

I thought SQL Server was not built on Open Server; is this just a concern
for Backup Server?

Is the restriction lifted with SQL Server 10.0.2.3 or 10.0.2.4?

Answer

The problem here is that Netware doesn't have the concept of static linking.
A library is just another style of NLM with additional features.

As only one copy of an NLM can be loaded at a time, the Replication Server
must run using the same libraries as the SQL Server.

So if Replication Server requires libraries at 10.0.2, SQL Server must be
able to run on those libraries as well, and SQL Server 10.0.1 cannot run on
10.0.2 connectivity libraries.

This restriction will not be lifted with future releases, because it is a
restriction imposed by the operating system, not by Sybase.
----------------------------------------------------------------------------

Disclaimer: No express or implied warranty is made by Sybase or its
subsidiaries with regard to any recommendations or information presented in
SYBASE Technical News. Sybase and its subsidiaries hereby disclaim any and
all such warranties, including without limitation any implied warranty of
merchantability or fitness for a particular purpose. In no event will Sybase
or its subsidiaries be liable for damages of any kind resulting from use of
any recommendations or information provided herein, including without
limitation loss of profits, loss or inaccuracy of data, or indirect, special,
incidental, or consequential damages. Each user assumes the entire risk of
acting on or utilizing any item herein including the entire cost of all
necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey

Contributing Writers:
Jerold Brenner, Joel Brown, Ian Cairns, Dwayne Chung, Leigh Ann Hussey,
Howard Sardis, Kathy Saunders

Send comments and suggestions to:

Sybase Technical News
6475 Christie Avenue
Emeryville, CA 94608

or send mail to technews@sybase.com

Copyright 1996  Sybase, Inc. All Rights Reserved.
                                   Q10.3.4

            Sybase Technical News Volume 5 Number 3, August 1996

This issue of Sybase Technical News contains new information about your
Sybase software. This newsletter is intended for Sybase customers with
support contracts. You may distribute it within a supported site; however,
it contains proprietary information and may not be distributed publicly.
Sybase Technical News and the troubleshooting guides are included on the
AnswerBase CD, SupportPlus Online Services Web pages, and the Sybase
PrivateLine forum of CompuServe. Send comments to technews@sybase.com. To
receive this document by regular email, send name, full internet address and
customer ID to technews@sybase.com.

In this Issue

Tech Support News/Features

   * 1996 Technical Support North American Holiday Schedule
   * Download EBFs and Information from
     Sybase ESD

SQL Server General

   * Guidelines for Using dbcc with System 11
   * SQL Server and Serialized Log Writes
   * sp_who Showing loginame Field Change
   * Backup Server Performance
   * OpenVision High Availability Configuration
     and SQL Server

Scripts, Procedures & Code

   * Passing a UNIX Variable to a SQL Script
   * Size Conversion Stored Procedures
   * tli_mapper Script and User Guide

Connectivity / Tools / PC

   * Urgent Data Setup Requirements
   * Explanation of "nrpacket: recv,
     Connection timed out"

Bug Reports

   * Bug 66639/73421 - alter database in VLDB May Cause 605 Errors

1996 Technical Support North American Holiday Schedule

Sybase Technical Support is open on all holidays and provides full service
on many. During the limited-service holidays shown below, Technical Support
will provide the following coverage:

   * SupportPlus Preferred and Advantage customers may log all cases; we
     will work on priority 1 and 2 cases over the holiday.
   * 24x7 and 24x5 Support customers may log priority 1 cases; we will work
     on these over the holiday.
   * SupportPlus Standard, Desk Top, and Regular Support customers may
     purchase Extended-hour Technical Support for coverage over the holiday.
       Sybase Technical Support
       limited-service holidays
            U.S. customers

        Holiday        Date
        Labor Day      September 2
        Thanksgiving   November 28
        Christmas      December 25

       Sybase Technical Support
       limited-service holidays
          Canadian customers

        Holiday                Date
        Labour Day             September 2
        Canadian Thanksgiving  October 14
        Christmas Day          December 25
        Boxing Day             December 26

If you have questions, please contact Technical Support.

Download EBFs and Information from Sybase ESD

The Sybase Electronic Software Distribution (ESD) System lets Sybase
customers with active support agreements download bug fixes and information
via the World Wide Web. This article describes how to do it.

Register for SupportPlus Online Services

If you have not done so already, you must register for SupportPlus Online
Services before you can access Sybase ESD.

  1. Connect to the Sybase Web page at http://www.sybase.com

  2. Click on Services & Support.

  3. Click on Sybase Enterprise Technical Support.

  4. Click on SupportPlus On-line Services Registration.

  5. On the registration screen, fill in your contact ID (leave blank if you
     do not know your contact ID), your Internet e-mail address, your
     (user-defined) password, your company name, your Sybase customer
     number, your name, and your telephone number.

  6. Click on the Submit Registration Form button.

Your registration form is then checked electronically. If it is correct,
you may immediately begin using SOS. If any information is missing from the
form or does not correspond with our records, a Sybase customer service
representative will verify the information, complete the registration, and
either contact you or send you an e-mail to inform you of your registration
status.

Accessing Sybase ESD

To access Sybase ESD, follow these steps:

  1. Follow the hyperlinks from the Sybase, Inc. homepage to the SupportPlus
     Online Services login screen and log in.

  2. Select Electronic Software Distribution from the list of options.

  3. Select the EBF you wish to download from the list generated for you.

     ------------------------------------------------------------------
     Note
     This list shows all available EBFs for products for which you are
     licensed. It is updated regularly. Selecting and downloading an
     EBF creates a log entry which Sybase uses to update your customer
     profile.
     ------------------------------------------------------------------

Where to Find Current EBF and Bug Information

You can use ESD to find out current EBF availability on your licensed
products and platforms, and to view cover letters.

Finding Bugs Fixed in a Particular EBF

If you have an EBF number and wish to find out if a particular bug is fixed
in it, follow these steps:

  1. If you are on the first page, enter the EBF number in the field
     provided. If you have loaded a page with EBFs applicable to you, go to
     step 2.

  2. Click on View Cover Letter - on the first page, it is a button:

     [Image]

     On your custom EBF list, it is a field in a table:

     [Image]

  3. Select Edit > Find from the browser menus.

  4. Enter the bug ID number and click Find.

Site Requirements for Sybase ESD

You need the following things in order to use Sybase ESD successfully.

An Internet Connection

ESD is accessible only through the World Wide Web (WWW), which is reachable
only through an established Internet connection.

Netscape Navigator

Sybase ESD requires Netscape Navigator software, or any other browser that
supports Secure Sockets Layer, to provide a secure link between your
workstation and the Sybase Technical Support database.

Netscape Navigator is available for the following platforms:

   * 386/486/Pentium (BSDI)
   * Apple Macintosh (MacOS 7.x; Power PC optional)
   * Digital AXP (Digital UNIX 2.0)
   * Windows 3.1, Windows 95, Windows NT, Windows for Workgroups
   * HP 9000/800 (HP-UX 9.03)
   * IBM RS/6000 (AIX 3.2)
   * Silicon Graphics (IRIX 5.2)
   * Sun SPARC (Solaris 2.3, SunOS 4.1.3)

The following table lists the hardware requirements for Netscape Navigator
on various platforms:
            Netscape Navigator hardware requirements

 Platform/Processor   Disk Space  Recommended Memory  Minimum Memory
 MS Windows/386sx     1MB         1MB                 1MB
 Macintosh/68020      2MB         8MB                 4MB
 UNIX/not applicable  3MB         16MB                16MB

     ------------------------------------------------------------------
     Note
     A 14.4 Kbps or higher speed modem is recommended.
     ------------------------------------------------------------------

If you do not already have Netscape Navigator installed, you can send an
e-mail to sales@netscape.com or call Netscape at (415) 528-2555 for
purchasing information (Netscape Navigator list price: $39.00). Netscape
Navigator is supported by Netscape Communications Corporation.

A Proxy Server

If there is a firewall between your system and the Internet connection, you
must use a proxy server that supports Secure Sockets Layer (SSL) protocol,
or use an approved NSC SOCKS server. Check with your network administrator
to get the details on the SOCKS or proxy server being used at your location.

     ------------------------------------------------------------------
     Note
     Sybase provides an on-line FAQ regarding SSL when you click on the
     registration option for SupportPlus Online Services.
     ------------------------------------------------------------------

Guidelines for Using dbcc with System 11

Sybase offers a suite of utilities for system operations. The fundamental
system maintenance utilities are dump, load, and dbcc. These are your
building blocks for developing a system maintenance and operations strategy.

Customers have wondered whether they should run dbcc prior to every backup.
This is the same issue faced by all large system designers: when, in what
manner, and how often does the application need to have a database backup?

The application design challenge is to balance the amount of data exposure
risk against the time and capital required to maintain the system. System
backups and data consistency checking are a part of the overall system
design.

About dbcc

The dbcc utility has many diagnostic features, including data and index
consistency features. You can use dbcc proactively to detect small problems
before they become larger problems.

When you execute the dbcc utility prior to a system backup, it notifies you
potential problems that must be corrected before backup may occur. If a
database has corrupt data pages and the corruption is not fixed prior to a
backup, under very rare circumstances recovery from that backup could fail.

When to Use dbcc

While the quality initiative implemented with SQL Server 11 allows you
greater flexibility in system maintenance, bear in mind that the dbcc
utility has repeatedly proven useful in identifying both hardware and
software problems that could affect the integrity of a Sybase database.
For stable systems with large databases, run dbcc periodically; how often
depends on your system situation, the volatility of the data, and the data
exposure risk for the project as a whole.

The following summarizes when to use dbcc:

   * To determine the extent of possible damage after a system error occurs.
   * If you are experiencing a period of hardware or computer facility
     failures.
   * If you suspect that a database is damaged. For example, if using a
     particular table generates the message "Table corrupt," the dbcc
     utility can determine whether other tables in the database are also
     damaged.
   * After installing new revisions of software, both database and
     application. You may want to run the dbcc utility for a few days or
     weeks just to be sure that everything is running smoothly.
   * Periodically, as part of your business continuity plan.
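
A periodic check along the lines above can be driven from the shell. This
is a sketch only: the database list is a placeholder for your site, and the
generated SQL is printed so it can be inspected before being piped into
isql.

```shell
#!/bin/sh
# Sketch: generate dbcc consistency checks for a list of databases.
# DATABASES and the eventual isql invocation are placeholders.

DATABASES="master mydb"

dbcc_sql() {
    for db in $DATABASES; do
        cat <<EOF
dbcc checkdb($db)
go
dbcc checkalloc($db)
go
dbcc checkcatalog($db)
go
EOF
    done
}

# Dry run; to execute, pipe into isql -Usa -S<server> instead,
# typically from cron during a quiet period.
dbcc_sql
```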

SQL Server and Serialized Log Writes

Question

What is the advantage of serial over parallel log writes? Why does SQL
Server use serial writes by default?

Answer

When devices are mirrored on a system, the system can perform I/O in
parallel (with asynchronous I/O completion) or in serial (where one I/O must
complete before the next one begins).

By default, SQL Server uses serialized, rather than parallel, writes to
mirrored log devices, in order to guarantee that at least one log will be
uncorrupted on recovery. Here are two examples of how serialized log writes
reduce the risk of corruption from hardware failure:

   * SQL Server writes in 2k pages, but most controllers write in 512-byte
     blocks. Thus, one log could be corrupted if a hardware error occurred
     which interrupted a SQL Server page write.
   * Most smart controllers write blocks in the fastest order, which may not
     be 1-2-3-4. If the controller can write 3-4 now and can wait to write
     1-2 when the disk spins around again, it will. In this case, a failure
     that interrupts the controller might leave the last log page partially
     written, for that log.

With serial writes, in both these cases, the other log will still be
uncorrupted. SQL Server knows which is the uncorrupted log because the side
of the mirror that failed is so flagged.

sp_who Showing loginame Field Change

Question

Why does sp_who show the procedure owner, instead of the current user, in
the loginame field when that user executes a stored procedure?

For example, suppose user "john" is logged in and sp_who shows the loginame
as "john". When "john" executes a stored procedure owned by "sa", sp_who
shows "sa" as the value for loginame even though "john" is executing that
stored procedure. Why is this?

Answer

When recompiling, the procedure is put into its creation-time context, and
the current user is set to the owner of the procedure. This is expected
behavior and is required to resolve the names of referenced objects.

Backup Server Performance

Sybase has received many different questions about dump/load performance.
This article is a brief summary of the information you need to solve
dump/load performance problems.

Backup Server Data Flow

Imagine the flow of dump/load data as water in a pipe. At one end of the
pipe are the database disks; at the other, the archive devices:

[Image]

If any section of the pipe is narrower than the others, the total flow will
be no greater than can pass through that section.

Depending on your hardware, the areas in which to look for bottlenecks are:

   * Disk I/O throughput
   * SCSI database disk controller throughput
   * Archive device I/O throughput
   * SCSI archive device controller throughput
   * Main bus
   * SCSI adapters
   * CPU usage
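
The pipe analogy reduces to taking the minimum of the component rates. A
minimal shell sketch (the rates in K/second below are examples, not
measurements from your hardware):

```shell
#!/bin/sh
# Sketch: effective dump throughput is the minimum rate of any
# component in the data path.  Example rates are in K/second.

min_rate() {
    min=""
    for r in "$@"; do
        # Take each rate; keep the smallest seen so far.
        if [ -z "$min" ] || [ "$r" -lt "$min" ]; then
            min=$r
        fi
    done
    echo "$min"
}

# Example path: disk at 2000, controller at 4000, two compressing
# 8mm stripes at 1000 each (2000 combined).
min_rate 2000 4000 2000
```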

Finding Data Flow Bottlenecks

In order to determine where the bottlenecks are, you must:

   * Monitor system statistics
   * Determine the number of I/Os and the I/O rate through the various
     hardware components.

Use utilities like sar or iostat (both for UNIX) to monitor system
statistics. On most platforms you can monitor the disk I/O rate, the number
of I/Os per disk, and system CPU utilization. The amount of time spent in
system mode, in user processes, and waiting for I/O is important.

You need the exact hardware configuration and published I/O rates for the
various I/O subsystem components. Some typical numbers that have been
measured in dump/load benchmarks are shown in the following table.

                          I/O rates from benchmarks

  System Component       Rates/Limits
  database disk          50 I/Os per second. 2000K/second (using up to
                         8 stripes). 1600K/second (using 8-16 stripes).
                         Fast and wide SCSI has the highest rate, SCSI II
                         the next highest, and SCSI the lowest.
  SCSI controller        No more than 3 tape devices on one controller.
                         No more than 4 database disks on one controller.
  adapter and main bus   Actual rate is 25% of the published figure.
  8mm tape               500K/second.
  8mm compressing tape   1000K/second.
  (Exabyte 8505)
  4mm tape               183K/second.
  4mm compressing tape   366K/second.

Additional Information

Here are two additional items to consider, both of them from the standpoint
of Backup Server itself, rather than the hardware:

   * The dump algorithm does not load balance over stripes; each stripe
     gets approximately the same amount of data regardless of device speed.
   * Mixing fast and slow archive devices is not good from a performance
     standpoint, since the same amount of data will be written to both
     devices and will not complete until all data is written to the slow
     device.

Example Analysis

Suppose a database is resident on one disk. Assume that the database disk
SCSI controller, bus adapter, main bus, and archive device SCSI adapter have
unlimited throughput. The archive devices are 8mm compressing tape drives.
You should consider two questions:

   * How many stripes should I use to get the quickest dump time?
   * How long will it take to dump 1GB of data?

Here are your datapoints for analysis:

   * Since the database resides only on one disk, the fastest that data can
     come off the disk is 2000K/second.
   * Given that the 8mm compressing tape drive can run at 1000K/second, the
     configuration that will yield the fastest dump rate would be a two
     stripe dump.
   * Adding more stripes will yield no better results; the disk becomes the
     bottleneck in that case (actually, using more stripes will slow the
     dump down since the tapes may not be able to stream).

Thus, to dump a 1GB database using two stripes should take about eight
minutes, at the rate of 2000K/second.
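
The arithmetic behind that estimate can be checked in the shell (taking
1GB as 1048576K and the disk's top rate of 2000K/second, matched by the
two 1000K/second compressing 8mm stripes):

```shell
#!/bin/sh
# Sketch: estimated dump time for a 1GB database at 2000K/second,
# the fastest rate off the single database disk in the example.

DB_KB=1048576          # 1GB expressed in kilobytes
RATE_KB_PER_SEC=2000   # limiting component rate

SECS=$((DB_KB / RATE_KB_PER_SEC))
MINS=$((SECS / 60))

echo "about $MINS minutes ($SECS seconds)"
```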

OpenVision High Availability Configuration and SQL Server

The following is an advisory about using the OpenVision High Availability
configuration software with Open Client Client-Library and SQL Server.

History of the Issue

During testing of a System 10 Client-Library application on AIX, connecting
to various 4.9.2 (EBF 4157) Solaris-based SQL Servers, a customer noticed
that ct_cancel, dbcancel, or Ctrl-C calls (all out-of-band data, or OOBD,
calls) would hang the application if directed to certain SQL Servers.

All machines involved were running Solaris 2.3. The SQL Servers that hung
had one thing in common: they were also running OpenVision High Availability
(HA) software.

OpenVision HA provides for automatic and transparent retargeting of one SQL
Server's processing to another in the event of a failure in the primary
machine. Two or more machines are connected to a common set of dual ported
disk drives. An OpenVision HA process runs on each machine in the
configuration, and detects and allows for failover of Sybase operations to
the remaining machine(s).

Explanation and Action

The source of the problem turned out to be that an OpenVision HA script was
setting the TCP/IP parameter tcp_old_urp_interpretation to a value of 0.
When this value was changed to 1, and the application reconnected to the
server, ct_cancel events no longer caused a hang. The 1 setting allowed SQL
Server to interpret and handle OOBD properly.

If your Solaris platform includes OpenVision HA, DB-Library or
Client-Library, and SQL Server, check the value of
tcp_old_urp_interpretation and be sure it is set to 1, by following these
steps:

  1. Execute the following command:

     /usr/sbin/ndd -get /dev/tcp tcp_old_urp_interpretation

  2. If the value returned is not 1, execute the following command:

     ndd -set /dev/tcp tcp_old_urp_interpretation 1
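
The two steps can be combined into one check. This is a sketch: because
ndd needs root on a live Solaris host, the current value is passed in as an
argument here and the set command is printed rather than executed; the real
invocations appear in the comments.

```shell
#!/bin/sh
# Sketch: decide whether tcp_old_urp_interpretation needs resetting.
# On a live Solaris host, the current value would come from:
#   /usr/sbin/ndd -get /dev/tcp tcp_old_urp_interpretation

check_urp() {
    current="$1"
    if [ "$current" = "1" ]; then
        echo "tcp_old_urp_interpretation already 1; nothing to do"
    else
        # On the real host, run (as root):
        #   ndd -set /dev/tcp tcp_old_urp_interpretation 1
        echo "need: ndd -set /dev/tcp tcp_old_urp_interpretation 1"
    fi
}

check_urp 0
check_urp 1
```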

For more information on OpenVision High Availability software, please visit
their URL, http://www.ov.com.

Passing a UNIX Variable to a SQL Script

Question

How can I pass a UNIX variable to a SQL script?

Answer

Use UNIX level variables with isql inside a shell script file.

Example

In the following example, a database name is passed to a script that gets
sp_helpdb information and directs it to a file, /tmp/helpdb_dbname.

#!/bin/sh
#---------------------------------------------------------------
# name: sql_var_passer [v1.0]
# usage:       sql_var_passer <dbname>
#
# comments:
#  -    This program invokes isql, which reads input from this
#       file down to the line that says "EOF".
#  -    $1 is the argument <dbname> -- that is, the first arg to
#          the program.
#  -    $pwd is on a line by itself inside the input for isql in
#       order to prevent the password from being echoed in the
#       output from "ps", as a security precaution.
#----------------------------------------------------------------

SYBASE=/usr/sybase
server="myserver"
user="sa"
pwd=""

$SYBASE/bin/isql -U${user} -S${server} <<EOF >/tmp/helpdb_$1
$pwd
sp_helpdb $1
go
EOF
echo ""
cat /tmp/helpdb_$1
echo ""

#=================== END OF SCRIPT==========================

Here is a sample run of the example script:

sql_var_passer master

name     db_size  owner   dbid    created       status
-------- -------- -------- ------ ------------- ---------------
master   3.0 MB   sa      1      Jan 01, 1900  no options set

device_fragments        size     usage           free kbytes
----------------------- --------- -------------- -----------
master                 3.0 MB   data and log    896

device                                         segment
---------------------------------------------- ----------------
 master                                        default
 master                                        logsegment
 master                                        system

(return status = 0)

Size Conversion Stored Procedures

Summary

The following stored procedures provide tools for doing a variety of size
conversions, as follows:

   * sp_convertpages: accepts a given number of 2k pages and returns the
     equivalent units of bytes, kilobytes, megabytes and blocks
   * sp_convertmegs: accepts a given number of megabytes and returns the
     equivalent units of bytes, kilobytes, blocks and pages
   * sp_convertkbs: accepts a given number of kilobytes and returns the
     equivalent units of bytes, megabytes, blocks and pages
   * sp_convertbytes: accepts a given number of bytes and returns the
     equivalent units of kilobytes, megabytes, blocks and pages

     ------------------------------------------------------------------
     Note
     If you are running on a Stratus platform, which uses 4k pages, you
     will have to edit the sp_convertpages script accordingly.
     ------------------------------------------------------------------

Save the scripts to a file named sp_convertprocs, and then execute the
following command to add the stored procedures to sybsystemprocs:

isql -Usa -Ppassword < sp_convertprocs
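
For a quick check outside the server, the same arithmetic can be done in
the shell. This is a sketch mirroring sp_convertpages, not part of the
Sybase scripts, and it assumes the standard 2k page (adjust for Stratus):

```shell
#!/bin/sh
# Sketch: convert a count of 2k pages into bytes, kilobytes,
# 512-byte blocks and megabytes, like sp_convertpages.

convertpages() {
    pages=$1
    bytes=$((pages * 2048))
    echo "Total Pages Input  : $pages"
    echo "Number of Bytes    : $bytes"
    echo "Number of Kilobytes: $((bytes / 1024))"
    echo "Number of Blocks   : $((bytes / 512))"
    echo "Number of Megabytes: $((bytes / 1048576))"
}

# Example: 512 pages is exactly one megabyte.
convertpages 512
```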

     ------------------------------------------------------------------
     WARNING!
     These scripts are neither maintained nor supported by Sybase
     Technical Support. You may modify them to suit your installation.
     ------------------------------------------------------------------

The Scripts

/* FileName:

sp_convertprocs

   To Implement:

isql -Usa -P < sp_convertprocs

   History:

22-Jan-96 v1.0 Creation [wg]

*/

use sybsystemprocs
go

/* The sp_convertpages procedure starts here */

create proc sp_convertpages @num_pgs_in int as

set nocount on
declare @total_byts_of_pgs int
declare @num_meg int
declare @num_blks int
declare @num_pages int
declare @num_kb int
declare @blockstring char(35)
declare @pagestring char(35)
declare @megstring char(35)
declare @kbstring char(35)
declare @bytstring char(35)

select @num_pages=@num_pgs_in
select @total_byts_of_pgs=(@num_pages * 2048)
select @num_blks=@total_byts_of_pgs/512
select @num_meg=@total_byts_of_pgs/1048576
/* where 1048576 is 1024*1024 bytes/meg */

select @num_kb=@total_byts_of_pgs/1024

select @pagestring= "Total Pages Input     : " + convert(char(35),@num_pages)

select @blockstring="Number of Blocks      : " + convert(char(35),@num_blks)

select @megstring= "Number of Megabytes   : " + convert(char(35),@num_meg)

select @bytstring="Number of Bytes       : " + convert(char(35),@total_byts_of_pgs)

select @kbstring="Number of Kilobytes   : " + convert(char(35),@num_kb)

create table #space_convert_table(Storage_Conversion_Utility char(35))

insert #space_convert_table values (@pagestring)
insert #space_convert_table values ("--------------------------")
insert #space_convert_table values (@bytstring)
insert #space_convert_table values (@kbstring)
insert #space_convert_table values (@blockstring)
insert #space_convert_table values (@megstring)

select * from #space_convert_table
go

grant execute on sp_convertpages to public
go

/* The sp_convertmegs procedure starts here */

create proc sp_convertmegs @num_megs_in int as

set nocount on
declare @total_byts_of_megs int
declare @num_megs int
declare @num_blks int
declare @num_pages int
declare @num_kb int
declare @blockstring char(35)
declare @pagestring char(35)
declare @megstring char(35)
declare @kbstring char(35)
declare @bytstring char(35)

select @num_megs=@num_megs_in
select @total_byts_of_megs=@num_megs * 1048576
/* where 1048576 is 1024*1024 bytes/meg */

select @num_blks=@total_byts_of_megs/512
select @num_pages=@total_byts_of_megs/2048
select @num_kb=@total_byts_of_megs / 1024

select @megstring= "Total Megs Input    : " + convert(char(35),@num_megs)

select @pagestring="Number of Pages     : " + convert(char(35),@num_pages)

select @blockstring="Number of Blocks    : " + convert(char(35),@num_blks)

select @bytstring=  "Number of Bytes     : " + convert(char(35),@total_byts_of_megs)

select @kbstring= "Number of Kilobytes : " + convert(char(35),@num_kb)

create table #space_convert_table(Storage_Conversion_Utility char(35))

insert #space_convert_table values (@megstring)
insert #space_convert_table values ("--------------------------")
insert #space_convert_table values (@bytstring)
insert #space_convert_table values (@kbstring)
insert #space_convert_table values (@blockstring)
insert #space_convert_table values (@pagestring)

select * from #space_convert_table
go

grant execute on sp_convertmegs to public
go

/* The sp_convertkbs procedure starts here */

create proc sp_convertkbs @num_kb_in int as

set nocount on
declare @total_byts_of_kb int
declare @num_megs int
declare @num_blks int
declare @num_pages int
declare @num_kb int
declare @blockstring char(35)
declare @pagestring char(35)
declare @megstring char(35)
declare @kbstring char(35)
declare @bytstring char(35)

select @num_kb=@num_kb_in

select @total_byts_of_kb=@num_kb * 1024

select @num_blks=@total_byts_of_kb/512

select @num_pages=@total_byts_of_kb/2048

select @num_megs=@num_kb/1024

select @megstring= "Number of Megs      : " + convert(char(35),@num_megs)

select @pagestring="Number of Pages     : " + convert(char(35),@num_pages)

select @blockstring="Number of Blocks    : " + convert(char(35),@num_blks)

select @bytstring=  "Number of Bytes     : " + convert(char(35),@total_byts_of_kb)

select @kbstring=   "Total Kilobytes     : " + convert(char(35),@num_kb)

create table #space_convert_table(Storage_Conversion_Utility char(35))

insert #space_convert_table values (@kbstring)
insert #space_convert_table values ("--------------------------")
insert #space_convert_table values (@bytstring)
insert #space_convert_table values (@megstring)
insert #space_convert_table values (@blockstring)
insert #space_convert_table values (@pagestring)

select * from #space_convert_table
go

grant execute on sp_convertkbs to public
go

/* The sp_convertblocks procedure starts here */

create proc sp_convertblocks @num_blks_in int as

set nocount on
declare @total_byts_of_blks int
declare @num_meg int
declare @num_blks int
declare @num_pages int
declare @num_kb int
declare @blockstring char(35)
declare @pagestring char(35)
declare @megstring char(35)
declare @kbstring char(35)
declare @bytstring char(35)

select @num_blks=@num_blks_in

select @total_byts_of_blks=@num_blks * 512

select @num_pages=@num_blks/4

select @num_meg=@total_byts_of_blks/1048576 /* where 1048576 is 1024*1024 bytes/meg */

select @num_kb=@total_byts_of_blks / 1024

select @blockstring="Total Blocks Input   : " + convert(char(35),@num_blks)

select @pagestring= "Number of Pages      : " + convert(char(35),@num_pages)

select @megstring= "Number of Megabytes  : " + convert(char(35),@num_meg)

select @bytstring=  "Number of Bytes      : " + convert(char(35),@total_byts_of_blks)

select @kbstring=  "Number of KiloBytes  : " + convert(char(35),@num_kb)

create table #space_convert_table(Storage_Conversion_Utility char(35))

insert #space_convert_table values (@blockstring)
insert #space_convert_table values ("-----------------------------------")
insert #space_convert_table values (@bytstring)
insert #space_convert_table values (@kbstring)
insert #space_convert_table values (@megstring)
insert #space_convert_table values (@pagestring)

select * from #space_convert_table
go

grant execute on sp_convertblocks to public
go

Procedure to Drop Size Conversion Procedures

The sp_dropconverts stored procedure allows for the orderly drop of the size
conversion procedures.

create proc sp_dropconverts as

if exists (select * from sysobjects where name = "sp_convertblocks")
        begin
                drop proc sp_convertblocks
        end

if exists (select * from sysobjects where name = "sp_convertpages")
        begin
                drop proc sp_convertpages
        end

if exists (select * from sysobjects where name = "sp_convertmegs")
        begin
                drop proc sp_convertmegs
        end

if exists (select * from sysobjects where name = "sp_convertkbs")
        begin
                drop proc sp_convertkbs
        end

if exists (select * from sysobjects where name = "sp_convertbytes")
        begin
                drop proc sp_convertbytes
        end
go

grant execute on sp_dropconverts to public
go

tli_mapper Script and User Guide

The script included in this article is distributed to assist DBAs with the
translation of various IP addresses and port numbers to TLI (Transport Layer
Interface) strings, and back again. You may find it most handy during large
scale network addressing changes.

     ------------------------------------------------------------------
     Note
     The mapping of TLI strings may contain platform specific address
     family and padding values (explained below). Check your current
     interfaces file to confirm those values and make any changes
     necessary to tli_mapper.
     Also note that this utility requires a running SQL Server to
     perform the calculations. Update tli_mapper accordingly to reflect
     your $SYBASE and $DSQUERY values.
     ------------------------------------------------------------------

     ------------------------------------------------------------------
     WARNING!
     This script is neither maintained nor supported by Sybase
     Technical Support. You may modify it to suit your installation.
     ------------------------------------------------------------------

TLI Creation/Change Implications

TLI address strings refer to an access path across which the SQL Server and
clients communicate. These strings are embedded in the interfaces file, and
reflect hex-translated references to the IP address and port number, along
with some TLI structure-specific information.

You ordinarily use sybinit to create the interfaces file. However, when
local network designations or internal addresses change, you can make
changes two ways:

   * Delete and add SQL Servers with sybinit
   * Convert the existing TLI addresses manually

     ------------------------------------------------------------------
     WARNING!
     Be very careful when you consider manually editing the interfaces
     file. Each line is highly sensitive to the tab and positional
     format references. If you inadvertently insert a space or
     character the SQL Server may be unable to start up, or clients may
     be unable to communicate with it. When you must translate and
     change long, cryptic TLI numeric strings like this one:
     x00021E6C9D0E7D240000000000000000
     the chance for error increases yet again.
     ------------------------------------------------------------------

The tli_mapper Utility

The tli_mapper utility has been created to ease the transition of SQL
Servers to new network addresses.

When invoked, tli_mapper accepts either of the following:

   * IP address/port number combination, in which case it produces a
     translated TLI Address string, or
   * TLI Address string, in which case it produces the corresponding IP
     address and port number.

If you have to change a large number of SQL Servers to new addresses, you
can simply pass each address/port number combination to tli_mapper, and use
the return value to update the interfaces file accordingly. The process
should significantly lower the risk of making a transition to a new network
addressing scheme.

Further Noteworthy Items

Any change you make to the interfaces file should be consistent with the
current address and port of the machine. Make sure that any tests of a new
interface file target a machine running the new address location.

In a production environment, the time window to perform the changeover might
not be long enough to perform the conversions and edits. You may want to
consider making a copy of the interfaces file and using that as a source for
your changes. During tests, you may be able to selectively change targets
and connect to them using the -Iinterfacesfilename option of isql. When it
is time for the actual cutover, you can simply rename the file to reflect
the true interfaces file name.

The tli_mapper utility was developed for a Solaris machine, which uses an
"address family" value of 0002 and 16 zeros as TLI address padding
(explained in more detail below). This numbering scheme is fairly common on
other TLI-based platforms. Please review your current interfaces file, note
the address family/pad spaces in use, and if different, update the variables
ADDRESS_FAMILY and PAD_SPACES in tli_mapper accordingly.

Note also that tli_mapper can interrogate a particular IP address to detect
activity on the port. This option is handy to check for a SQL Server's
ability to respond to client requests.

The Interfaces File and TLI Structure

Here is a typical interfaces file entry:

SERVERNAME
query tli tcp /dev/tcp x00021E6C9D0E7D240000000000000000
master tli tcp /dev/tcp x00021E6C9D0E7D240000000000000000

The lines which begin with query and master are referred to as service
lines. This table describes the parts of the service line:
                    Interfaces file entry line breakdown

  Part   Value         Description

  1      query,        Identifies the service: query is where clients
         master, or    connect to find the server; master is where servers
         console       listen for connections; console is used for the
                       dump/load process (not in versions 10.x and after).

  2      tli           Indicates that this is an entry for a machine with a
                       TLI-based programming interface.

  3      tcp           Used by utility programs, including sybtli, to
                       indicate an entry for a TCP/IP interface rather than
                       SPX (Novell) or StarLan.

  4      /dev/tcp      The device file which acts as an interface between
                       the user program and the networking software; the
                       network "end point."

  5      The TLI       See the TLI network address breakdown below.
         network
         address

Here is the breakdown of the parts of the TLI network address
00021E6C9D0E7D240000000000000000:
                        TLI Network Address Breakdown

   Address Part     Description

   0002             The TLI "address family", always at the start of a TLI
                    address. TCP/IP is family 2. Depending on the network
                    vendor and the byte order of the machine, this works
                    out as a hexadecimal "0002" (most common) or "0200"
                    (the format depends on whether the machine is "little
                    endian" or "big endian"). Look at how your current
                    interfaces file is structured to confirm your address
                    family number format, and change the variable
                    ADDRESS_FAMILY in tli_mapper accordingly.

   1E6C             The hexadecimal equivalent of the port number. In this
                    example, the hexadecimal value 1E6C translates to the
                    decimal port number 7788.

   9D0E7D24         This 8-digit hexadecimal address is the translation of
                    the decimal IP address. It is formed by translating
                    each period-separated decimal portion of the IP
                    address to its hexadecimal equivalent (minus the
                    periods). Single digits are entered with a leading
                    zero.
                       * 9D = 157
                       * 0E = 14
                       * 7D = 125
                       * 24 = 36

   0000000000000000 This 16-zero set closes out the TLI address. It pads
                    the remaining TLI address and is mandatory at the end
                    of the address string. Note that the number of padding
                    zeros is platform specific. Check the number of
                    padding zeros in your current interfaces file, and
                    adjust the variable PAD_SPACES in tli_mapper
                    accordingly.
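The translation tli_mapper performs reduces to a few hexadecimal
conversions. Here is a sketch in Python (using the Solaris address family
0002 and 16 zeros of padding described above; the function names are
illustrative):

```python
def ip_port_to_tli(ip, port, family="0002", pad="0" * 16):
    """Build a TLI address string from a dotted IP address and a port."""
    octets = "".join("%02X" % int(o) for o in ip.split("."))
    return "x%s%04X%s%s" % (family, port, octets, pad)

def tli_to_ip_port(tli):
    """Recover the IP address and port from a TLI address string."""
    s = tli.lstrip("x")
    port = int(s[4:8], 16)                     # e.g. 1E6C -> 7788
    ip = ".".join(str(int(s[i:i + 2], 16)) for i in range(8, 16, 2))
    return ip, port

print(ip_port_to_tli("157.14.125.36", 7788))
# x00021E6C9D0E7D240000000000000000
print(tli_to_ip_port("x00021E6C9D0E7D240000000000000000"))
# ('157.14.125.36', 7788)
```

As with tli_mapper itself, override the family and padding defaults to
match your own interfaces file.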

Sample Runs of tli_mapper Utility

This section contains some examples of tli_mapper in action.

No Line Arguments

Executing tli_mapper with no line arguments returns the usage information.

tli_mapper

 _____________________________
|                             |
| *** >> TLI-IP Mapper << *** |
|_____________________________|
Usage:
------
     tli_mapper [-t]
                [-i]
                [-x]
where
     [-t] translates a TLI string to IP/PORT number
     [-i] translates an IP/PORT number to a TLI string
     [-x] examines an IP/PORT combination for activity
Input Format Examples:
----------------------
HOSTADDRESS >>  157.14.125.36
PORTNUMBER  >>  7756
TLISTRING   >>  00021E4C9D0E7D240000000000000000
Using the -t Flag

Executing tli_mapper with the -t flag returns an address and port number
based on the TLI string you enter. The computer prompts are included in this
example; user input is indicated by boldface.

tli_mapper -t

 _____________________________
|                             |
| *** >> TLI-IP Mapper << *** |
|_____________________________|
Enter TLI String  >>  00021E4C9D0E7D240000000000000000
=================================
Completed IP Translation String :
IPADDRESS :  157.14.125.36
PORT      :  7756
=================================
... program exiting.

Using the -i Flag

Executing tli_mapper with the -i flag returns a TLI string based on the
address and port number you enter. The computer prompts are included in this
example; user input is indicated by boldface.

tli_mapper -i

 _____________________________
|                             |
| *** >> TLI-IP Mapper << *** |
|_____________________________|
Enter IP Address   >>  157.14.125.36
Enter Port Number  >>  7788
=================================
Completed IP Translation String :
x00021E6C9D0E7D240000000000000000
=================================
... program exiting.

Using the -x Flag

Executing tli_mapper with the -x flag tests the address and port number you
enter for activity. The computer prompts are included in this example; user
input is indicated by boldface.

tli_mapper -x

 _____________________________
|                             |
| *** >> TLI-IP Mapper << *** |
|_____________________________|
Enter IP Address   >>  157.14.125.36
Enter Port Number  >>  7788
*** Testing for IP and PORT activity :
==========================================
IP Address  : 157.14.125.36
Port Number : 7788
is active and in a LISTEN state.
==========================================
... program exiting.

Closing Comments

tli_mapper requires that a SQL Server be up and running to perform the
translations, but there is no error checking in this version for the
existence of such a server.

Before you run tli_mapper, make sure that the variables $SYBASE and $DSQUERY
reflect your operating environment and edit them if necessary.

Make the script executable with the following command before you try to run
it:

chmod +x tli_mapper

The Script

#!/bin/csh -f

# Name: tli_mapper

# Generic UNIX script to perform IP Address
# and Port Number translations to a TLI String,
# or perform TLI to IP/Port Number translations.
# This script can also interrogate a particular
# IP Address/Port number combination to see if
# a process is listening on the port.

#------------------------------------------------
# History:
# 22-Jan-96 v1.0 creation [wg]
# 18-Jun-96 v1.1 slight mods to embedded isql
#                command lines [am & lah], addition
#                of formatting help lines [am]
#+++++++++++++++++++++++++++++++++++++++++++++++++

umask 0
setenv SYBASE /usr/u/sybase
setenv DSQUERY SYBASE

echo ""
echo " _____________________________"
echo "|                             |"
echo "| *** >> TLI-IP Mapper << *** |"
echo "|_____________________________|"
echo ""

if ($#argv == 0) then
        set SW = "0"
        else
        set SW = "`echo $argv[1] | cut -d"-" -f2`"

endif

if ($SW != "i" && $SW != "t" && $SW != "x") then
echo "Usage: "
echo "------"
echo "     tli_mapper [-t]"
echo "               [-i]"
echo "               [-x]"
echo "where"
echo ""
echo "     [-t] translates a TLI string to IP/PORT number "
echo "     [-i] translates an IP/PORT number to a TLI string"
echo "     [-x] examines an IP/PORT combination for activity"
echo ""
echo "Input Format Examples:"
echo "----------------------"
echo "HOSTADDRESS >>  157.14.125.36"
echo "PORTNUMBER  >>  7756"
echo "TLISTRING   >>  00021E4C9D0E7D240000000000000000"
echo ""
exit
endif

if ($SW == "t") then
echo ""
echo "Enter TLI String in the format: "
# Help with format...
echo "AdrfPortIpIpIpIp0000000000000000"
echo -n ">> "
set tlist = ($<)
echo $tlist | grep "x" > /dev/null

        if ($status == 0) then
                echo ""
                echo "... please leave off the leading x"
                echo ""
        exit
        endif

        if ($tlist == "") then
                echo ""
                echo "*** No String Input - TLI String. Exiting. "
                echo ""
        exit
        endif

set ct=1
set tlistring=${tlist}
set total_ip_address=""

$SYBASE/bin/isql -Usa -P <<EOS >! /tmp/vrstring
set nocount on
select substring("${tlistring}",5,4)
select substring("${tlistring}",9,2)
select substring("${tlistring}",11,2)
select substring("${tlistring}",13,2)
select substring("${tlistring}",15,2)
go
EOS

foreach BASEVALS (`cat /tmp/vrstring | grep -v "-" `)

$SYBASE/bin/isql -Usa -P <<SYBMARK >! /tmp/holder
set nocount on
select hextoint("${BASEVALS}")
go
SYBMARK
set individual_entry=`cat /tmp/holder|grep -v "-" `

if ($ct == 1) then
        set port_number=$individual_entry
        else if ($ct == 2) then
        set total_ip_address=$individual_entry
else
        set total_ip_address=$total_ip_address.$individual_entry
endif

set ct=`expr $ct + 1`
end

echo ""
echo "================================= "
echo "Completed IP Translation String : "
echo ""
echo "IPADDRESS :  $total_ip_address "
echo "PORT      :  $port_number"
echo ""
echo "================================= "
echo ""

else if ($SW == "i") then
echo ""
echo "Enter IP Address in the following format:"
# Help with format
echo "nnn.nnn.nnn.nnn"
echo -n ">> "
set ipa = ($<)

if ($ipa == "") then
echo ""
echo "*** No String Input - IP Address. Exiting. "
echo ""
exit
endif

echo "Enter Port Number in the following format:"
echo "nnnn"
echo -n ">> "
set iport=($<)

if ($iport == "") then
echo ""
echo "*** No String Input - Port Number. Exiting. "
echo ""
exit
endif

set DOM=`echo $ipa | cut -d. -f1`
set ARE=`echo $ipa | cut -d. -f2`
set LOC=`echo $ipa | cut -d. -f3`
set MAC=`echo $ipa | cut -d. -f4`
set ct=1

$SYBASE/bin/isql -Usa -P <<EOS >! /tmp/port
set nocount on
select right((select inttohex(${iport})),5)
go
EOS

set ronport=`cat /tmp/port | grep -v "-" `
foreach IPSTRING ( ${DOM} ${ARE} ${LOC} ${MAC} )

$SYBASE/bin/isql -Usa -P <<SYBMARK >! /tmp/holder
set nocount on
select right((select inttohex(${IPSTRING})),3)
go
SYBMARK
set individual_entry=`cat /tmp/holder|grep -v "-" `

if ($ct == 1) then
set total_ip_address=$individual_entry
else
set total_ip_address=$total_ip_address$individual_entry
endif

set ct=`expr $ct + 1`
end

echo ""
echo "================================= "
echo "Completed IP Translation String : "
echo ""
echo "x0002${ronport}${total_ip_address}0000000000000000"
echo ""
echo "================================= "
echo ""

else if ($SW == "x") then
echo ""
echo "Enter IP Address in the following format: "
echo "nnn.nnn.nnn.nnn"
echo -n ">> "
set ipa = ($<)

if ($ipa == "") then
echo ""
echo "*** No String Input - IP Address. Exiting. "
echo ""
exit
endif

echo "Enter Port Number in the following format: "
echo "nnnn"
echo -n ">> "
set iport=($<)

if ($iport == "") then
echo ""
echo "*** No String Input - Port Number. Exiting. "
echo ""
exit
endif

echo ""
echo ""
echo "*** Testing for IP and PORT activity : "
echo ""
rsh ${ipa} netstat -a | grep ${iport} | grep LISTEN > /dev/null

if ($status == 1) then
echo "=========================================="
echo "WARNING: Selected Port Number ${iport}"
echo "is not active on IP Address ${ipa}"
echo "=========================================="
echo ""

else

echo "=========================================="
echo "IP Address  : ${ipa}  "
echo "Port Number : ${iport}   "
echo "is active and in a LISTEN state."
echo "=========================================="
echo ""
endif

echo "... program exiting. "

Urgent Data Setup Requirements

Question

How do I set up out-of-band or urgent data for SQL Server with a PC client?

Answer

In order for the "urgent" flag to have an effect, both the PC and the host
must use out-of-band data (OOBD) per the RFC 793 specification.

There are two standards for OOBD, RFC 793 (single bit), and RFC 1122 (seven
bit). Sybase software uses only the single bit version. Operating system and
other software vendors vary in which standard they use. Solaris, for
example, implements RFC 1122 as the default for OOBD. This means that
Net-Library will not be able to pass the correct data, since it is using an
urgent bit instead of an urgent byte. ODBC uses RFC 1122 as well, but you
can also implement the RFC 793 style of OOBD with the 34cancel parameter.
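The RFC 793 style can be demonstrated with ordinary sockets: the urgent
data is a single byte carried outside the normal stream. A minimal sketch
over a loopback TCP pair (Python; real Sybase clients do this inside
Net-Library, not in application code):

```python
import socket

# Build a loopback client/server pair; the OS picks a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"cancel me")                # ordinary in-band stream
cli.send(b"!", socket.MSG_OOB)           # one urgent byte, RFC 793 style

inband = conn.recv(64)                   # reads up to the urgent mark
urgent = conn.recv(1, socket.MSG_OOB)    # fetches the urgent byte itself
print(inband, urgent)

for s in (conn, cli, srv):
    s.close()
```

An RFC 1122 stack interprets the urgent pointer differently, which is why
the two sides must agree on the single-byte interpretation.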

     ------------------------------------------------------------------
     Note
     RFC stands for Request For Comments, the numbered document series
     in which Internet standards are published and tracked.
     ------------------------------------------------------------------

The following vendors support OOBD:
                           Vendors supporting OOBD

  Vendor        Comments

  FTP PC/TCP    Supports RFC 793, but the default standard is RFC 1122.
                You must load ethdrv.exe with the -b option (it stands for
                BSD style, another name for RFC 793) to use RFC 793 style
                urgent data.

  TCP/IP        Almost all TCP/IP protocol stacks support RFC 793 in
                their current implementations. Some older versions of the
                software (for example, LWP 4.0) did not have this style
                of OOBD implemented and needed a patch in order to do so.

  IPX/SPX       Supports RFC 793.

  Named Pipes   Supports RFC 793.

DECNet does not support OOBD.

Most hosts that have TCP/IP support RFC 793, and need no modifications to
use this style of urgent data. The exceptions are:

   * Solaris 2.2  needs patch 101018-06 & -04
   * Solaris 2.3  needs patch 101346-03

In both these versions of Solaris, you may need to execute this command:

/usr/sbin/ndd -set /dev/tcp tcp_rexmit_interval_max XXXX

where XXXX is the number of milliseconds (500 for 1/2 second) by which to
increase the TCP retransmit interval.

If you are running on NCR Unix System V.4 2.0.2, you must change the
following parameter in the file /etc/conf/pack.d/tcp/space.c:

int tcp_bsd42_urgent = 0

to

int tcp_bsd42_urgent = 1

You must then recompile, rebuild the kernel, and restart the operating
system.

Question

How do SQL Server 11.0 and 10.x differ from 4.9.x in their handling of
out-of-band data?

Answer

In-band urgent data is sent in the normal stream of communications. SQL
Server versions prior to 10.x use only RFC 793 OOBD and don't use in-band
attention signals. Version 10.x and later SQL Servers use in-band attention
signals, and can also respond to OOBD by using the RFC 793 style urgent bit.

This means that clients running DB-Library versions older than 10.x can
communicate a dbcancel() to a version 10.x and later SQL Server. However,
because a System 10 client does not use RFC 793 at all, it will not send
urgent data to a pre-System 10 server.

Net-Library versions prior to 10.x use OOBD, and can't do in-band attention.
Net-Library versions 10.x and later will do only in-band attention, and will
not send OOBD.

Explanation of "nrpacket: recv, Connection timed out"

Question

I occasionally see the following message in my error log:

nrpacket: recv, Connection timed out.

What does it mean?

Answer

This is an OS error message. It is raised when SQL Server calls the recv
function, but the connection has failed either before or during the recv
function. It is not a SQL Server error; SQL Server merely reports it.

Explanation

There are a few reasons why the connection might have failed:

   * The client exited without going through a disconnect, as when someone
     switches off their PC. (This is the most likely cause.)
   * The network is extremely busy and is dropping packets.
   * KEEPALIVE is disabled for the OS, so TCP/IP times out the connection
     when the client has been idle for some time.

Bug 66639/73421 - alter database in VLDB May Cause 605 Errors

Question

Some time after executing alter database in a very large database, I got a
605 error referring to the system table sysgams:

Attempt to fetch logical page 0 in database my_huge_db belongs to object id 99, not to object sysgams

There seems to be no database corruption. What's going on?

Answer

While there are other causes of 605 errors, the reference to sysgams in the
605 error points to the cause being bug 66639 (SQL Server 10.x) or the
related bug 73421 (SQL Server 11.x), both of which have been fixed recently.

     ------------------------------------------------------------------
     Note
     The bug only occurs with alter database and not with create
     database.
     ------------------------------------------------------------------

Explanation

The error might occur any time after you use alter database to increase the
size of a database through a boundary which is a multiple of 63GB (63, 126,
189 or 252). The sysgams table is used to manage new allocations. Using
alter database to extend the size of the database through the 63GB boundary
fails to extend the sysgams table as it should. The 605 error occurs as SQL
Server looks for the non-existent page in sysgams.
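Whether a planned alter database is at risk reduces to checking whether the
old and new sizes fall on opposite sides of a multiple of 63GB. A sketch of
that check (sizes in megabytes; the function name is illustrative):

```python
GB = 1024                 # megabytes per gigabyte
BOUNDARY = 63 * GB        # the risky boundary, in megabytes

def crosses_boundary(old_size_mb, added_mb):
    """True if growing the database crosses a multiple of 63GB
    (the condition under which bug 66639/73421 can strike)."""
    return old_size_mb // BOUNDARY < (old_size_mb + added_mb) // BOUNDARY

# Growing a 62GB database by 2GB crosses the 63GB mark:
print(crosses_boundary(62 * GB, 2 * GB))    # True
# Growing a 10GB database by 5GB does not:
print(crosses_boundary(10 * GB, 5 * GB))    # False
```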

While there should be no data corruption as a result of this bug, no new
allocations can take place without raising a 605, rendering the database
unusable for writing.

The only workaround known at present, should you encounter this bug, is to
bcp the data out, drop and re-create the database with the larger size, and
bcp the data back in. If this is not feasible, it is possible for Sybase
Technical Support to patch a new sysgams extent.

Are You at Risk?

There are two types of customers who are at risk for this bug:

   * Those who will want to alter the size of their database past one of the
     63GB boundaries in the near future.
   * Those who have already altered the database through one of the 63GB
     boundaries but who have not yet had any 605 errors.

Again, this bug has been fixed both in 11.x and 10.0.x. If you think you are
at risk for this bug, Sybase recommends you get EBF 6201 and above for SQL
Server 10.x (Bug #66639). Bug #73421 appeared only in beta releases of SQL
Server 11.x and was fixed for the GA release.

     ------------------------------------------------------------------
     Note
     Once the alter database has been done with an EBF not containing
     the fix, the problem will persist (if you have it at all) until
     patched out. Get the EBF and install it before you do the alter
     database.
     ------------------------------------------------------------------

Disclaimer: No express or implied warranty is made by Sybase or its
subsidiaries with regard to any recommendations or information presented in
SYBASE Technical News. Sybase and its subsidiaries hereby disclaim any and
all such warranties, including without limitation any implied warranty of
merchantability or fitness for a particular purpose. In no event will Sybase
or its subsidiaries be liable for damages of any kind resulting from use of
any recommendations or information provided herein, including without
limitation loss of profits, loss or inaccuracy of data, or indirect special
incidental or consequential damages. Each user assumes the entire risk of
acting on or utilizing any item herein including the entire cost of all
necessary remedies.

Staff

Principal Editor: Leigh Ann Hussey

     FAQ Administrative Note: The credits gif was whacked so I'm
     missing some Sybase folks who contributed to this document. Sorry.

Send comments and suggestions to:

Sybase Technical News
6475 Christie Avenue
Emeryville, CA 94608

or send mail to technews@sybase.com

Copyright 1996  Sybase, Inc. All Rights Reserved.
                                   Q10.3.5

            Sybase Technical News Volume 5 Number 4, October 1996

This issue of Sybase Technical News contains new information about your
Sybase software. This newsletter is intended for Sybase customers with
support contracts. You may distribute it within a supported site; however,
it contains proprietary information and may not be distributed publicly.
Sybase Technical News and the troubleshooting guides are included on the
AnswerBase CD, SupportPlus Online Services Web pages, and the Sybase
PrivateLine forum of CompuServe. Send comments to technews@sybase.com. To
receive this document by regular email, send name, full internet address
and customer ID to technews@sybase.com.

In this Issue

Tech Support News/Features

   * Changes to Technical News
   * Download EBFs and Information from
     Sybase ESD
   * Tech Info Library Reports
   * 1996 Technical Support North American Holiday Schedule

SQL Server General

   * Installing Sybase Products from CD-ROM on AIX 3.2.5
   * Additional Space Needed When Creating Database Devices on AIX
   * Dump Option "with retaindays" Explanation
   * Char(N) Variables Appear not to Concatenate

Connectivity / Tools / PC

   * Using the BCP_IN Sample Script to load bcp Files
   * Using the BCP_OUT Sample Script to Archive Table Data

----------------------------------------------------------------------------

Changes to Technical News

Sybase documentation is undergoing a massive restructuring in order to make
it more useful to customers, both internal and external. After five years
and hundreds of articles, Sybase Technical News is changing along with the
rest of Sybase's documentation.

For the last several issues Sybase Technical News has been mailed to
customers only on AnswerBase CD (though it has been available through a
number of online systems), with the option to order hardcopy; however, as
there have been no orders for hardcopy in the last year, we will be
discontinuing that option.

Sybase Technical News will continue to be a vehicle for hot topics, and will
eventually be even more useful to you, as it will be dynamically generated
according to parameters specified by you.

After this issue, Sybase Technical News will go on a brief hiatus, but look
for its return in a new, dynamic format, on the World Wide Web and through
our new Knowledge Base.

Accessing SupportPlus Online Services

Sybase SupportPlus Online Services (SOS) is your World Wide Web connection
to many of Sybase's services, including our Tech Info Library and Electronic
Software Distribution (ESD).

Register for SupportPlus Online Services

Follow these steps to register for SOS:

  1. Connect to the Sybase Web page at http://www.sybase.com
  2. Click on Services & Support.
  3. Click on Sybase Enterprise Technical Support.
  4. Click on SupportPlus On-line Services Registration.
  5. On the registration screen, fill in your contact ID (leave blank if you
     do not know your contact ID), your Internet e-mail address, your
     (user-defined) password, your company name, your Sybase customer
     number, your name, and your telephone number.
  6. Click on the Submit Registration Form button.

Your registration form is then electronically checked. If it is correct, you
may immediately begin using SOS. If any information is missing from the form
or information does not correspond with our records, a Sybase customer
service representative will verify the information, complete the
registration and either contact you or send you an e-mail to inform you of
your registration status.

ESD

The ESD system lets Sybase customers with active support agreements download
bug fixes and information via the World Wide Web. Follow these steps to
reach it:

  1. Follow the hyperlinks from the Sybase, Inc. homepage to the SupportPlus
     Online Services login screen and log in.
  2. Select Electronic Software Distribution from the list of options.

From there, you can download EBFs for your licensed platforms, or view EBF
cover letters to see which bugs are fixed in a particular EBF.

Tech Info Library Reports

Standard reports regularly updated in the Tech Info Library area of SOS
include:

   * Product Availability Reports
   * Bug List (updated weekly)
   * Certification Reports for these Sybase products:
        * SQL Server
        * Replication Server
        * Connectivity products
        * System Management products

1996 Technical Support North American Holiday Schedule

Sybase Technical Support is open on all holidays and provides full service
on many. During the limited-service holidays shown below, Technical Support
will provide the following coverage:

   * SupportPlus Preferred and Advantage customers may log all cases; we
     will work on priority 1 and 2 cases over the holiday.
   * 24x7 and 24x5 Support customers may log priority 1 cases; we will work
     on these over the holiday.
   * SupportPlus Standard, Desk Top, and Regular Support customers may
     purchase Extended-hour Technical Support for coverage over the holiday.

 Sybase Technical Support
 limited-service holidays
      U.S. customers

    Holiday        Date
  Thanksgiving   November 28
  Christmas      December 25

     Sybase Technical Support
    limited-service holidays 
        Canadian customers

         Holiday                Date
  Canadian Thanksgiving   October 14
  Christmas Day           December 25
  Boxing Day              December 26

If you have questions, please contact Technical Support.

Masthead

Staff

Principal Editor: Leigh Ann Hussey
Contributing Writers: Sybase CS&S InfoComm Team, Sybase Technical Support

Send comments and suggestions to Sybase Technical News, 6475 Christie
Avenue, Emeryville, CA 94608, or email to technews@sybase.com.

This issue of Sybase Technical News contains new information about your
Sybase software. This newsletter is intended for Sybase customers with
support contracts. You may distribute it within a supported site; however,
it contains proprietary information and may not be distributed publicly. All
issues of Sybase Technical News and the troubleshooting guides are included
on the AnswerBase CD, SupportPlus Online Services Web pages, and the Sybase
PrivateLine forum of CompuServe.

To receive this document by regular email, send your name, full internet
address, and customer ID to technews@sybase.com.

Disclaimer

No express or implied warranty is made by Sybase or its subsidiaries with
regard to any recommendations or information presented in Sybase Technical
News. Sybase and its subsidiaries hereby disclaim any and all such
warranties, including without limitation any implied warranty of
merchantability or fitness for a particular purpose. In no event will
Sybase or its subsidiaries be liable for damages of any kind resulting from
use of any recommendations or information provided herein, including
without limitation loss of profits, loss or inaccuracy of data, or
indirect, special, incidental, or consequential damages. Each user assumes
the entire risk of acting on or utilizing any item herein, including the
entire cost of all necessary remedies.

Installing Sybase Products from a CD-ROM Attached to an IBM RS/6000 Running
AIX 3.2.5

     ------------------------------------------------------------------
     Note
     This technote is a correction of the earlier version, which
     referred to the install program as "sybinstall", instead of
     "sybsetup".
     ------------------------------------------------------------------

Summary

AIX 3.2.5 cannot correctly read a CD formatted using the Rock Ridge
Extensions standard common to many platforms. Due to this incompatibility,
the sybsetup program is not recognized. Consequently, you cannot run
sybsetup from a CD on an AIX 3.2.5 machine to install Sybase products.
Instead, use the non-GUI sybload utility.

AIX 4.x is capable of reading the Sybase CD correctly.

Attributes
 OS: AIX 3.2.5         Version: 10.x, 11.0
 Platform: IBM RS/6000 Last Revision: 29-Aug-96
 Product: n/a          ID: 2370

Contents

This section steps you through unloading Sybase products from a CD-ROM with
the sybload utility.

Before You Begin

Complete the following tasks before you begin:

Fill out the worksheet in the product installation guide. You will use some
of this information in certain steps below. Make sure the SYBASE environment
variable is set to the correct release directory.

Unloading with sybload -D

To unload Sybase software from the CD-ROM to your machine:

  1. Place the CD-ROM in the drive.

  2. Log in as the "root" superuser.

  3. Mount the CD-ROM with a command like the following:

     /etc/mount -v cdrfs -r /dev/device_entry /cdrom

     Where device_entry names the CD-ROM drive, and cdrom names the
     directory where the drive is mounted.

  4. Log out and then log in as the System Administrator recorded on your
     worksheet, usually "sybase".

  5. Move to the SYBASE root directory:

     cd $SYBASE

  6. Start the sybload utility:

     /cdrom/sybload -D

  7. sybload prompts you for the following information:

       * SYBASE directory: Confirm that the current directory is the SYBASE
         root directory, or specify the correct directory path.

       * Local or remote installation: Enter "L" for local. You cannot
         execute a remote installation from CD-ROM using sybload.

       * Name of disk file of global archive: Enter the CD-ROM disk name
         recorded on your worksheet, usually /cdrom/sybimage.

       * Customer Authorization String (CAS): Enter the string from the
         software packaging that allows you to access your products.

       * Sybase products: Select the products you want to install from the
         sybload menu. Enter the number of each product, pressing Return
         after each number. When done making selections, press Return
         twice, which creates a blank line.

       * Product confirmation: sybload lists the products you chose. Enter
         "y" to confirm that the list is correct, "q" to quit, or any other
         character to display the menu and select more products.

  8. Do not interrupt the software unloading.

     Depending on the size of each product and the number of products you
     select, this process can take anywhere from a few minutes to half an
     hour. During the process sybload lists the files it is unloading. On
     completion, sybload displays a list of the unloaded products.

  9. Log out, and then log in as the "root" superuser.

 10. Unmount the CD-ROM:

     /etc/umount /cdrom

 11. Remove the CD from the drive.

What's Next

Install, configure, and upgrade each product according to the corresponding
installation and configuration documentation.

Additional Space Needed When Creating Database Devices on AIX

----------------------------------------------------------------------------

Summary

Allow 1MB for the Logical Volume Control Block when creating SQL Server
11.0.x devices on IBM RISC System/6000 AIX. This TechNote describes why and
points out a common disk-space issue that occurs.

Attributes
 OS: AIX             Version: 11.0.x
 Platform: RS6000    Last Revision: 10/11/96
 Product: SQL Server ID: 1383

Contents

Allow Space for LVCB

When you create a database device with disk init, you must allow 1MB on the
device for the LVCB, because of the way that AIX and Sybase together manage
space.

Factors that Affect Disk Space Allocation

Several factors affect how disk space is allocated:

   * SQL Server allocates space for databases in units of 256 pages, or
     1/2MB.
   * AIX creates logical volumes in physical partitions (PP), typically 4MB.
   * The LVCB occupies the first 512 bytes allocated to a logical volume.
   * Sybase reserves the first 4K (2 pages) on a database device for the
     LVCB by setting vstart to 2.
   * Sybase's create and alter database commands use whole megabytes only.

When you try to create a database the same size as the logical volume, SQL
Server creates a smaller database than you requested because of the reserved
space. You must make the logical volume at least 1MB larger than the size of
the database you wish to put on it. The example in the next section
describes each step and its effect.

Example: Creating a Logical Volume

The table below describes the steps in creating a 20MB device.

Step 1: Create a logical volume.
     AIX requires that you create logical volumes, which AIX allocates in
     physical partitions (PP). Physical partitions must be at least 2MB,
     but typically are 4MB. For example, you could create a logical volume
     of 16MB, 20MB, 24MB, and so on.
     Action: Create a 20MB logical volume on AIX.

Step 2: Run disk init.
     disk init offsets the device by 2 pages by automatically setting
     vstart (the device's starting page number) to 2. SQL Server does this
     to avoid overwriting AIX's Logical Volume Control Block (LVCB).
     Action: Subtract 2 pages from the page size of the logical volume:
     10,240 pages - 2 = 10,238 pages. Then run:

     disk init
     name = "device1",
     physname = "physname1",
     vdevno = 5,
     size = 10238

Step 3: Create the database.
     Round down to the nearest 1MB to avoid fragmentation. The create and
     alter database commands allow you to specify database size in whole
     megabytes only.
     Action: device1 is less than 20MB; therefore, create a 19MB database:

     create database database1 on device1 = 19

The Mysterious 1/2MB

In our example, we created a 20MB logical volume, and had less than 20MB for
disk init because of the LVCB.

We created a 19MB database on the device, even though disk init will allow
more.

If you create a larger database:

create database database2 on device1 = 20

Sybase tries to give you all the space possible, so it allocates to the
nearest 1/2MB for the database, resulting in 19.5MB. This causes
fragmentation later if you have to re-load the database: You cannot alter or
re-create a 19.5MB database, because the create and alter database commands
only accept whole-megabyte sizes.
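
The arithmetic in this example can be sketched as follows. This is an
illustrative calculation only (the helper names are hypothetical, not
Sybase utilities), assuming the 2KB page size used above and the default
vstart of 2:

```python
# Sizing a SQL Server device on an AIX logical volume, per the LVCB rules.
PAGE_SIZE = 2048      # bytes per SQL Server page
PAGES_PER_MB = 512    # 1MB / 2KB pages
VSTART = 2            # pages skipped for AIX's Logical Volume Control Block

def device_pages(lv_mb):
    # Usable pages after disk init offsets the device by vstart.
    return lv_mb * PAGES_PER_MB - VSTART

def safe_db_mb(lv_mb):
    # Largest whole-MB database that fits on the device; rounding down
    # avoids the fragmenting half-MB allocation described above.
    return device_pages(lv_mb) * PAGE_SIZE // (1024 * 1024)

print(device_pages(20))   # 10238 pages, matching the disk init example
print(safe_db_mb(20))     # 19 -> create database database1 on device1 = 19
```

The same calculation shows why any whole-megabyte logical volume leaves you
one MB short: losing even 2 pages drops the usable space just below the
volume's nominal size.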

For details on avoiding database fragmentation, see TechNote 1324, Segment
Remapping with load database When Moving a Database.

How disk init and buildmaster Set Disk Space

On AIX, SQL Server release 11.0.x disk init and buildmaster set vstart to 2
if vstart is unspecified or if vstart is specified as 0. vstart is the
starting page for the device. Setting vstart to 2 allows Sybase to skip the
first 2 pages to avoid overwriting AIX's Logical Volume Control Block
(LVCB).

The following table shows the effect of SQL Server on the logical volume.
   Command: disk init
   Behavior: Sets vstart to 2 if vstart is either set to 0 or unspecified.
   Required Action: Subtract the vstart setting from the disk init size
   parameter.

   Command: buildmaster (used during SQL Server installation)
   Behavior: Offsets data in the master device by 2 pages.
   Required Action: Subtract the vstart setting from the buildmaster -s
   (size) parameter.

Your database device will be smaller than the size of your logical volume by
the number of pages set by vstart. The default setting is 2.

If you do not adjust the disk init size for vstart, error 5123 can occur:

disk init encountered an error while
attempting to open/create the physical file...

     ------------------------------------------------------------------
     Note
     See the Sybase SQL Server System Administration Guide for
     information about disk init and vstart.
     ------------------------------------------------------------------

Dump Option "with retaindays" Explanation

----------------------------------------------------------------------------

Summary

This Tech Note explains the function and use of dump with retaindays.

Attributes
 OS: All                Version: 11.0.x
 Platform: All          Last Revision: 16-Sep-96
 Product: Backup Server ID: 2600

Contents

Question

What is the dump option with retaindays? Can I use it on any platform?

Answer

The dump option retaindays is usable on all platforms, and works the same
way everywhere, but is really only meaningful in the case of dumps to disk
or to single-file tape (such as QIC).

If you dump to a disk or single-file tape device using the option with
retaindays and then try to overwrite it before its retention period has
expired without specifying with init on the subsequent command, Backup
Server will query whether to quit or continue.

If you dump to a multi-file tape device and do not specify with init, Backup
Server always appends, never overwrites, so the dump is not destroyed no
matter what. If you do specify with init, Backup Server always overwrites
without checking, regardless of whether the target has expired.

     ------------------------------------------------------------------
     Note
     There is a configurable "tape retention" parameter for
     sp_configure; the with retaindays option overrides that retention
     period. The default retention period is zero days - that is,
     "already expired".
     ------------------------------------------------------------------
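
The rules above can be summarized as a small decision table. This is only a
sketch of the logic as described in this TechNote, not Sybase code, and the
function name is hypothetical:

```python
def backup_server_action(multi_file_tape, with_init, expired):
    # What Backup Server does when the dump target already holds a dump,
    # per the retaindays rules described above.
    if multi_file_tape and not with_init:
        return "append"        # never overwrites; the dump is preserved
    if with_init:
        return "overwrite"     # no expiration check is performed at all
    if expired:
        return "overwrite"     # retention period has already passed
    return "prompt"            # asks whether to quit or continue

print(backup_server_action(multi_file_tape=False, with_init=False,
                           expired=False))   # prompt
```

Note that with the default "tape retention" of zero days, every target is
"already expired", so the prompt case only arises when retaindays (or the
configured retention) is nonzero.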

Char(N) Variables Appear Not to Concatenate

----------------------------------------------------------------------------

Summary

Given two fixed char(N) strings of different length, the shorter string will
be padded with blanks to equal the length of the longer string. The result
gets assigned to the variable.

Attributes
 OS: All             Version: All
 Platform: All       Last Revision: 16-Sep-96
 Product: SQL Server ID: 2601

Contents

Question

Given the following declaration:

declare @var1 char(5)

If I perform the query select @var1 = @var1 + "A", the result has only one
"A" in @var1. It's not concatenating the char variable.

Is this a bug?

Answer

The result that you see is correct. Suppose, given @var1 char(5), you
execute a simple select:

select @var1 = "A"

The result of that select is "A    " (an `A' with 4 blank spaces after it).
If you then execute select @var1 = @var1 + "A", this is what happens:

  1. A temporary result is created, "A    A", that is 6 characters long.

  2. That temporary result is truncated on the right so that it is 5
     characters long, yielding "A    " - that is, no change from the
     original value of @var1.

This behavior for char(N) strings is consistent with ANSI: given two fixed
strings of different length, the shorter string will be padded with blanks
to equal the length of the longer string. The result gets assigned to the
variable.

If you want to have strings that don't get padded with blanks, you must use
varchar(N).
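
The char(5) semantics above can be modeled as pad-then-truncate to a fixed
width. This is an illustrative sketch in Python, not Transact-SQL, and the
helper name is hypothetical:

```python
def assign_char(value, width=5):
    # Model ANSI char(N) assignment: pad with blanks to N characters,
    # then truncate on the right to N characters.
    return value.ljust(width)[:width]

var1 = assign_char("A")          # "A    " - an 'A' plus 4 blanks
var1 = assign_char(var1 + "A")   # temporary "A    A" truncated back to 5
print(repr(var1))                # no visible change from "A    "

# varchar(N) does not pad, so concatenation behaves as you would expect:
var2 = "A"
var2 = (var2 + "A")[:5]          # "AA"
print(repr(var2))
```

The blanks from the char(5) padding consume the width that the appended
"A" would need, which is exactly the behavior the question describes.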

Using the BCP_IN Sample Script to Load bcp Files

----------------------------------------------------------------------------

Summary

This document details the BCP_IN sample script. You can use BCP_IN to
automate loading bcp files of user database tables. You edit the sample to
suit your environment.

For information on automated creation of table data in databases, see the
companion TechNote, "Using the BCP_OUT Sample Script to Create Tables".

Attributes
 OS: UNIX            Version: 4.9.2, 10.x, 11.x
 Platform: All       Last Revision: 6/26/96
 Product: SQL Server ID: 993

Contents

The BCP_IN sample script's command lines and routines are intended for you
to use as a model for creating a script that is easy to maintain and
troubleshoot.

     ------------------------------------------------------------------
     Note
     You can copy the sample script, contained in the BCP_IN file on
     Compuserve OpenLine. Search on the keyword "bcp".
     ------------------------------------------------------------------

Edit Considerations

In tailoring the sample script to your needs, consider these characteristics
of the script:

   * Only user databases (type="U") qualify. System databases do not
     qualify, including master, model, tempdb, and sybsystemprocs.
   * The bcp batch size during data input is set to 100. To change it,
     search on the string BATCHER.
   * BCP_IN does not check whether the log/data space is full during the
     bulk copy process.
   * BCP_IN does not check whether the bcp option, select into/bulkcopy, is
     being set in the target database.

You must review the output carefully for any errors. This script does not
detect all processing failures that may occur at runtime, such as privilege
errors or running out of space within the bcp target directory.

     ------------------------------------------------------------------
     Note
     Sybase Technical Support does not support this script.
     ------------------------------------------------------------------

Debugging Tip

Use the C-shell option "-xvf" instead of "-f" in the first line of the
script to aid in debugging. For example, "#!/bin/csh -xvf".

Editing Environment Variables

Once you have copied or created the script, edit the environment variables
to reflect your current operating configuration.

To find the section to edit, search on the string ENVIRONMENTAL VARIABLE
SECTION. Each variable is commented to aid you in editing. The following
topics provide additional edit information.

Usage Switches

A usage statement is returned if you invoke BCP_IN without any switches. The
usage switches are:

   * [-a] to bcp all qualifying bcpfiles
   * [-s] to bcp select databases from user input
   * [-s <db1> <db2> ...] to bcp the listed databases
   * [-t] to bcp a single file for a specific table
   * [-d] to echo variables in use

To perform bcp..in for one or more databases, invoke the script with the -s
switch. Follow this switch with a list of select databases for bcp..in. If
you do not provide a list of databases, the script prompts you for one.

By default, BCP_IN performs a bcp command for each qualifying file
(<database>.<tablename>.bcpfile) in the "holding" directory. You can change
the <database>.<tablename>.bcpfile syntax to match your file naming
conventions.

To check the current environment settings, invoke the BCP_IN script with
the -d switch. This switch shows the active settings.

BCPFILEDIR Variable

Be sure to point the BCPFILEDIR environment variable to a directory that has
enough free space to store the bulk copy data.

Confirmation Options

Two confirmation options are available:

   * CONF_SELECT, which allows you to confirm the database choices before
     performing bcp commands
   * BCP_OVERWRITE, which prompts you when there are previous bcp files in
     the target directory

Enable these options by changing their default value to 1.

     ------------------------------------------------------------------
     Note
     Changing CONF_SELECT and BCP_OVERWRITE to 1 may cause unexpected
     results in cron jobs.
     ------------------------------------------------------------------

bcp Command Settings

To change the column or row terminators or any other bcp command setting,
edit the bcp command line. Search for the string "$BCP $DBNAME..".

cron Submission

BCP_IN allows for submission of bcp commands via the UNIX cron command.

For example, you could schedule bcp of the pubs2 database for every Sunday,
one minute after midnight, in cron as follows:

[Image]

The actual cron command file syntax depends on your UNIX environment.
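
As a purely illustrative sketch (the script path and log file here are
hypothetical, and the field layout varies between cron implementations), a
System V-style crontab entry for the schedule just described might look
like:

```
# min hour day-of-month month day-of-week  command
1 0 * * 0 /usr/local/sybase/scripts/bcp_out -s pubs2 >> /tmp/bcp_out.log 2>&1
```

Here minute 1, hour 0, day-of-week 0 means one minute after midnight every
Sunday; check your system's crontab(5) manual page before relying on it.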

Run a Test First!

Once you have edited the sample script to reflect your current environment,
we recommend that you test and fine-tune it before using it in a production
environment.

Example Scenario

For example, you could create a test server with two or three small
databases where each of the databases contains a few tables with minimal
data. Let us say that you have a 300 MB database called testdb where:

   * 250 MB of data space is reserved
   * 100 MB of data is actually used
   * Log uses 50 MB

In this example, you need disk space for other applications, so you have
decided to recreate testdb to reflect 250 MB (200 MB data/50 MB log).

The normal dump/load database process does not allow you to load a larger
database into a smaller one. Use your edited versions of BCP_IN and BCP_OUT
to transfer the data as follows:

  1. Change the file permissions mode to executable:

     chmod +x bcp_in

     chmod +x bcp_out

  2. Bulk copy out the data using the single database switch:

     bcp_out -s testdb

     For details, see the companion TechNote, "Using the BCP_OUT Sample
     Script to Create Tables."

  3. Invoke isql and perform the following tasks:

     >> isql -Usa -P

     1> use master
     2> go

     1> sp_dboption "testdb","select into","true"
     2> go

     1> use testdb
     2> go

     1> checkpoint
     2> go

     1> quit

     These tasks accomplish the following:

   * Drop testdb and recreate it using the new sizes.
   * Recreate all database tables and accompanying schema.
   * Turn on the select into/bulkcopy option.

  4. Bulk copy in the data, using the -s switch:

     bcp_in -s testdb

  5. Dump the database. You can optionally turn off the select into/bulkcopy
     option.

After completing the steps in this example, you are ready to perform
subsequent backups/recovery.

BCP_IN Sample Script

# -----------------------------------------------------
# ENVIRONMENTAL VARIABLE SECTION
# -----------------------------------------------------

# Specify where to place the bcp files.

setenv BCPFILEDIR "/tmp"

# SYBASE and DSQUERY reflect the operating environment

# for your server.

setenv SYBASE "/sybase"
setenv DSQUERY "my_servername"

# PASSWD most likely needs to reflect the sa.

setenv PASSWD "-Usa -P "

# Set BCP to the path for the bcp executable, which
# inherits the $SYBASE variable.

setenv BCP "$SYBASE/bin/bcp"

# ISQL reflects a valid path for both the isql
# executable and a valid command string.

setenv ISQL "$SYBASE/bin/isql $PASSWD -S$DSQUERY"

# Set the batch size for bcp in.

set BATCHER=100

# If you want a prompt to confirm your database
# choices, set the CONF_SELECT flag to 1.

setenv CONF_SELECT "0"

# If you want a prompt when bcp outfiles already exist
# for the database/table qualifiers chosen, set the
# BCP_OVERWRITE flag to 1. If the flag is set to
# "0"(default), existing bcp files will be overwritten.

setenv BCP_OVERWRITE "0"

# Do not modify the umask setting.

umask 0

# -----------------------------------------------------
# SELECT DATABASE PROCESSING SECTION
# -------------------------------------------------

# Set SELECT_DB to "1" for automatic lookup of values
# in sel_dblist. Otherwise, use the default "0".

setenv SELECT_DB "0"

# FAQ note: I recommend deleting this:

# You can type in the names of databases to bcp out by placing
# the names between the quotes of sel_dblist.
# Make sure there are spaces between databases. This is
# only activated if select_db = 1 (above), and may not work
# properly anyway. On second thought, don't fool with this.

setenv SEL_DBLIST ""

# Do not modify the counter variables NO_TABLES_FOUND
# and NOARGS.

set NO_TABLES_FOUND = 0
set NOARGS=1

if (-e /tmp/db_list) rm /tmp/db_list
if (-e /tmp/db_list1) rm /tmp/db_list1

if (-e /tmp/table_list) rm /tmp/table_list
if (-e /tmp/table_list1) rm /tmp/table_list1

if (-e /tmp/tmp_bcpdbs) rm /tmp/tmp_bcpdbs

# -----------------------------------------------------
# STARTING MENU
# -------------------------------------------------

echo ""
echo " ________________________________________"
echo "|                                        |"
echo "| >>> SYBASE BCPFILE INPUT AUTOMATOR <<< |"
echo "|________________________________________|"
echo ""

if ($#argv == 0) then
        set SW = "0"
else
        set SW = "`echo $argv[1] | cut -d- -f2`"
endif

if ($SW != "a" && $SW != "s" && $SW != "d" && $SW != "t") then
        echo "Usage: "
        echo "------"
        echo "     bcp_in [-a]"
        echo "            [-s]"
        echo "            [-s] <dbname1> <dbnameX> ... "
        echo "            [-t]"
        echo "            [-d]"
        echo "where"
        echo ""
        echo "  [-a] bcp in all qualifying bcpfiles"
        echo "  [-s] bcp in select databases from user input"
        echo "  [-s <db1> <dbX> ...] bcp in listed databases"
        echo "  [-t] bcp in one file for a specific table"
        echo "  [-d] echos variables in use"
        echo ""
        exit

else if ($SW == "s" && "$#argv" >= "2") then
        set RCT = "1"
        while ($#argv > $RCT )
                set SELECT_DB = 1
                set VARCT=`expr $RCT + 1`
                set SEL_DBLIST=($SEL_DBLIST $argv[$VARCT])
                set RCT = `expr $RCT + 1`
        end

else if ($SW == "s" || $SW == "t") then
                set SELECT_DB = 1

else if ($SW == "d") then
                set YCK = 1

                if (! $?SYBASE) then
                        set SYBASE = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?DSQUERY) then
                        set DSQUERY = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?ISQL) then
                        set ISQL = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?PASSWD) then
                        set PASSWD = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?BCPFILEDIR) then
                        set BCPFILEDIR = "<Value not set - edit file>"
                        set YCK = 0
                endif

                echo " Variable List"
                echo "------------------------------"
                echo ""
                echo "SYBASE        = $SYBASE"
                echo "DSQUERY       = $DSQUERY"
                echo "ISQL          = $ISQL"
                echo "PASSWORD      = $PASSWD"
                echo "BCP DIRECTORY = $BCPFILEDIR"
                echo ""
                exit
endif

# Confirm database choice.

if ($SELECT_DB == 1 && $SW != "t") then
        echo ""
        echo "Selective Database Option Enabled."
        echo ""

        if ($SW == "s" && $#argv < 2) then
                echo "Enter Database Names (one at a time)"
                echo "and terminate list with a <CR>."
                echo ""

                set ct=1
                set CHKIT=1
                set TWOSTRIKES = 1

                if (-e /tmp/qualify) rm /tmp/qualify
                while ($ct == 1)
                        echo -n "DBNAME >> "
                        set QUALIFY=($<)
                        ls ${BCPFILEDIR}/${QUALIFY}* >& /dev/null
                        if ($status != 0) then
                                echo ""
                                echo "*** Warning: No files found for"
                                echo "    that Database. Please try again."
                                echo ""
                                set CHKIT=0
                                set TWOSTRIKES = 0

                        else if ("$QUALIFY" == "" && $CHKIT == 1) then
                                echo ""
                                echo "*** Warning : please input a valid"
                                echo "    Database name <or x to exit>."
                                echo ""
                                set CHKIT=0
                                set TWOSTRIKES = 0

                        else if ("$QUALIFY" == "x") then
                                echo ""
                                echo "... Program exiting ..."
                                echo ""
                                exit

                        else if ("$QUALIFY" == "") then
                                if ($CHKIT == 0 && $TWOSTRIKES == 0) then
                                        echo ""
                                        echo "... Program exiting ..."
                                        echo ""
                                        exit
                                endif
                                set ct=0
                        else
                                echo $QUALIFY >> /tmp/qualify
                                set CHKIT=0
                        endif
                end

                set SEL_DBLIST=`cat /tmp/qualify`
        endif
        echo ""

else if ($SW == "t") then
        echo ""
        echo "Single Table Option Enabled"
        echo ""
        set SELECT_DB=1
        echo -n "ENTER DBNAME    >>"
        set QUALDB=($<)
        set SEL_DBLIST="${QUALDB}"
        echo -n "ENTER TABLENAME >>"
        set QUALTBL=($<)
        echo ""
        if (! -e ${BCPFILEDIR}/${QUALDB}.${QUALTBL}.bcpfile) then
                echo ""
                echo "*** Warning: No bcp files found for"
                echo "    that table. Please try again."
                echo ""
                ls ${BCPFILEDIR}
                echo ""
                echo "... program exiting ..."
                echo ""
                exit
        endif
        echo ${QUALDB} > /tmp/db_list

else if ($SELECT_DB != 1 ) then
        echo ""
        echo "All User Databases Option Enabled."
        echo ""

        ls ${BCPFILEDIR}/*.bcpfile
        if ($status != 0) then
                echo ""
                echo "*** Warning: No qualifying files found for"
                echo "&emsp;&ensp;any Database. Please confirm choices
                echo "&emsp;&ensp;and invoke script again."
                echo ""
                echo "... program exiting ..."
                echo ""
                exit
        endif

        if (-e /tmp/bcpdbs) rm /tmp/bcpdbs
        touch /tmp/bcpdbs

        foreach file (`ls ${BCPFILEDIR}/*bcpfile`)
                echo $file:t | cut -d. -f1 >> /tmp/bcpdbs
        end

        uniq /tmp/bcpdbs > /tmp/db_list

endif

# Obtain select database list, if enabled.

if ($SELECT_DB == "1") then

        foreach MANUAL_DB (`echo $SEL_DBLIST`)
        if (-e /tmp/sdb_list) rm /tmp/sdb_list

$ISQL $PASSWD << ENDCMDS > /tmp/sdb_list

if not exists(select name from master..sysdatabases
where name = "$MANUAL_DB") print "DBNOTFOUND"

go

ENDCMDS

        grep "DBNOTFOUND" /tmp/sdb_list > /dev/null
        if ($status == "0") then
                echo ""
                echo "*** Warning.. Incorrect Database Name. ***"
                echo ""
                echo "DATABASE :  $MANUAL_DB"
                echo ""
                echo "Processing of BCP files terminated. Please"
                echo "confirm proper spelling and invoke script"
                echo "again."
                echo ""
                echo "... Program Exiting ..."
                echo ""
                exit
        endif

# Close out database selection list verification.

        end

# Restore the selective database list.

        if (-e /tmp/db_list1) rm /tmp/db_list1
        touch /tmp/db_list1
        foreach SELDBNAME (`echo $SEL_DBLIST`)
                echo  $SELDBNAME >> /tmp/db_list1
        end

# Otherwise, validate the entire database list.

else
        set TMPCHK=0
foreach DB_FILE_NAMED (`cat /tmp/db_list`)

$ISQL $PASSWD << ENDCMDS >/tmp/db_list1

if not exists (select name from master..sysdatabases
where name= "${DB_FILE_NAMED}") print "DBDOESNOTEXIST"

go

ENDCMDS

        grep "DBDOESNOTEXIST" /tmp/db_list1 > /dev/null
        if ($status == 0) then
                echo ""
                echo "*** Warning.. Database Does Not Exist. ***"
                echo ""
                echo "DATABASE :  $DB_FILE_NAMED"
                echo ""
                echo "... continuing with next database ..."
                echo ""
                set TMPCHK=1
        else
                        set TMPCHK=0
        endif
end
if (${TMPCHK} == 1) then
                echo "*** Warning.. No Databases Qualify ***"
                echo ""
                echo "Processing of BCP files terminated. Please"
                echo "confirm proper spelling and invoke script"
                echo "again."
                echo ""
                echo "... Program Exiting ..."
                echo ""
                exit
        endif

cp /tmp/db_list /tmp/db_list1

# Complete the database selection processing.

endif

        echo "-----------------------------------------"
        echo "The following Database list qualifies for"
        echo "bcp operations:"
        echo "-----------------------------------------"
        cat /tmp/db_list1
        echo ""

if ($CONF_SELECT == 1) then
        echo -n "*** Press "y" to continue >>>"
        set p_cont=($<)
        if ($p_cont != "y" && $p_cont != "Y") then
                echo ""
                echo "... Program exiting ..."
                echo ""
                exit
        endif
endif

set TMPCHK=0

foreach DBNAME (`cat /tmp/db_list1`)
echo ""
echo "================================="
echo ">>>>> DATABASE: $DBNAME "
echo "================================="
echo ""
echo "... confirming bcp table files for $DBNAME..."

# Get bcp files and confirm tables in each database.
# First, obtain bcp files.

if (-e /tmp/bcptbl1) rm /tmp/bcptbl1
if (-e /tmp/tbl_list2) rm /tmp/tbl_list2

        if ($SW != "t") then
        foreach BCPFQUAL
                (\Qls ${BCPFILEDIR}/${DBNAME}*bcpfile\Q)
        else
        foreach BCPFQUAL
                (\Qls ${BCPFILEDIR}/${DBNAME}.${QUALTBL}.bcpfile\Q)
        endif

        set BT=\Qecho ${BCPFQUAL}:t | cut -d. -f2
                | cut -d. -f1\Q

        echo ""
        echo "---------------------------------"
        echo " * BCP file input for : $BCPFQUAL"
        echo "-------------------------------"

$ISQL $PASSWD << ENDCMDS >/tmp/tbl_list2

if not exists (select name from ${DBNAME}..sysobjects
where name="${BT}") print "TABLEDOESNOTEXIST"

go

ENDCMDS

        grep "TABLEDOESNOTEXIST" /tmp/tbl_list2 > /dev/null
        if ($status == 0) then

                echo ""
                echo "*** Warning.. Table Does Not Exist. ***"
                echo ""
                echo "DATABASE :  $DBNAME"
                echo "TABLE    :  $BT"
                echo ""
                echo "... continuing with next table ... "
                echo ""
                set TMPCHK=1
                continue
        else
                set TMPCHK=0
        endif

$BCP $DBNAME..$BT in $BCPFQUAL $PASSWD  -b $BATCHER -c

# Go to the next table.

        end

        echo ""
        echo "==================================="
        echo ">>>>> BCP file input processing for"
        echo ">>>>> $DBNAME completed."
        echo "==================================="
        echo ""


# Go to the next database.

end

# Clean up.
#if (-e /tmp/db_list) rm /tmp/db_list
#if (-e /tmp/db_list1) rm /tmp/db_list1
#if (-e /tmp/table_list) rm /tmp/table_list
#if (-e /tmp/table_list1) rm /tmp/table_list1

Should "#" precede the above "if" statements?

echo ""
echo "... Program Exiting ..."
echo ""

Using BCP_OUT Sample Script to Archive Table Data

----------------------------------------------------------------------------

Summary

This document details the BCP_OUT sample script. You can use BCP_OUT to
automate creating bcp files of table data in all user databases.

For information on automated loading of table data into databases, see the
companion TechNote, "Using the BCP_IN Sample Script to Load bcp Files".

Attributes
 OS: UNIX            Version: 4.9.x, 10.x, 11.x
 Platform: All       Last Revision: 7/22/96
 Product: SQL Server ID: 994

Contents

The BCP_OUT sample script's command lines and routines are intended for you
to use as a model for creating a script that is easy to maintain and
troubleshoot.

     ------------------------------------------------------------------
     Note
     You can copy the sample script, contained in the BCP_OUT file on
     Compuserve Openline. Search on the keyword "bcp".
     ------------------------------------------------------------------

Edit Considerations

In tailoring the sample script to your needs, consider these characteristics
of the script:

   * Only user databases (type="U") qualify. System databases do not
     qualify, including master, model, tempdb, and sybsystemprocs.
   * The bcp batch size during data input is set to 100. To change it,
     search on the string BATCHER.
   * BCP_OUT does not monitor operating system space during the bulk copy
     process. OS commands, such as df, enable you to monitor disk space.
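As a rough guard, a wrapper can check free space in the target directory
before a run. The sketch below is an illustration only; the directory and
the 100 MB threshold are assumptions, not part of BCP_OUT:

```shell
#!/bin/sh
# Hedged sketch: warn if the filesystem holding the bcp target directory
# is low on space. BCPFILEDIR and the 100 MB threshold are assumptions.
BCPFILEDIR=/tmp
# df -Pk gives POSIX-format output in 1K blocks; field 4 is "Available".
avail=`df -Pk "$BCPFILEDIR" | awk 'NR>1 {print $4}' | tail -1`
if [ "$avail" -lt 102400 ]; then
    echo "*** Warning: less than 100 MB free in $BCPFILEDIR ***"
fi
echo "free space in $BCPFILEDIR: ${avail} KB"
```

Run the same df check periodically during long bcp sessions, since the
files grow as the copy proceeds.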

You must review the output carefully for any errors. This script does not
detect all processing failures that may occur at runtime, such as privilege
errors or running out of space within the bcp target directory.
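One way to make that review easier is to capture the script's output to a
log and scan it afterward. A minimal sketch, with an assumed log path and a
demonstration line standing in for real captured output:

```shell
#!/bin/sh
# Hedged sketch: scan a captured run log for common failure strings.
# The log path is an assumption; the printf below is demo data only.
LOG=/tmp/bcp_out.log
printf 'starting...\nMsg 1105, Level 17, State 1: out of space\n' > "$LOG"
if grep -E "Msg|[Ee]rror|[Ff]ailed" "$LOG" > /dev/null; then
    STATUS=suspect
else
    STATUS=clean
fi
echo "log status: $STATUS"
```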

     ------------------------------------------------------------------
     Note
     Sybase Technical Support does not support this script.
     ------------------------------------------------------------------

Debugging Tip

Use the C-shell options "-xvf" instead of "-f" in the first line of the
script to aid in debugging. For example, "#!/bin/csh -xvf".

Editing Environment Variables

Once you have copied or created the script, edit the environment variables
to reflect your current operating configuration.

To find the section to edit, search on the string ENVIRONMENTAL VARIABLE
SECTION. Each variable is commented to aid you in editing. The following
topics provide additional edit information.

Usage Switches

A usage statement is returned if you invoke BCP_OUT without any switches.
The usage switches are:

   * [-a] to bcp all nonsystem required databases
   * [-s] to bcp select databases from user input
   * [-s <db1> <db2> ...] to bcp the listed databases

     To perform bcp..out for one or more databases, invoke the script with
     the -s switch. Follow this switch with a list of select databases for
     bcp..out. If you do not provide a list of databases, the script
     prompts you for one.

   * [-t] to bcp a single table
   * [-d] to echo variables in use

     To check the current environment settings, invoke the BCP_OUT script
     with the -d switch. This switch shows the active settings.
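The switch letter itself is extracted with cut, as in the script's own
parsing. A small Bourne-shell rendering of that dispatch (the script proper
is csh; the "-s" value here is just a sample argument):

```shell
#!/bin/sh
# Hedged sketch of BCP_OUT's switch dispatch: strip the leading "-"
# with cut, then branch on the remaining letter.
arg="-s"                                # stands in for $argv[1]
SW=`echo "$arg" | cut -d"-" -f2`
case "$SW" in
    a) MODE="all user databases" ;;
    s) MODE="selected databases" ;;
    t) MODE="single table" ;;
    d) MODE="dump variables" ;;
    *) MODE="usage" ;;
esac
echo "switch=$SW mode=$MODE"
```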

BCPFILEDIR Variable

Be sure to point the BCPFILEDIR environment variable to a directory that has
enough free space to store the bulk copy data.

Confirmation Options

Two confirmation options are available:

   * CONF_SELECT, which allows you to confirm the database choices before
     performing bcp commands
   * BCP_OVERWRITE, which prompts you when there are previous bcp files in
     the target directory

Enable these options by changing their default value to 1.

     ------------------------------------------------------------------
     Note
     Changing CONF_SELECT and BCP_OVERWRITE to 1 may cause unexpected
     results in cron jobs.
     ------------------------------------------------------------------

bcp Command Settings

To change the column or row terminators or any other bcp command setting,
edit the bcp command line. Search for the string "$BCP $DBNAME..".
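For example, an edited command line with explicit column (-t) and row (-r)
terminators might look like the following; the database, table, and path
are placeholders, not values from the script:

```shell
$SYBASE/bin/bcp pubs2..authors out /tmp/pubs2.authors.bcpfile \
        -Usa -P -c -t "|" -r "\n"
```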

cron Submission

BCP_OUT allows for submission of bcp commands via the UNIX cron command.

For example, you could schedule bcp of the pubs2 database for every Sunday,
one minute after midnight, in cron as follows:

[Image: sample crontab entry, not reproduced]

     Replace bcp_in with bcp_out

The actual cron command file syntax depends on your UNIX environment.
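As an illustration, a crontab entry for the schedule above might read as
follows; the installation path is a placeholder, and the exact field layout
varies between cron implementations:

```shell
# minute hour day-of-month month day-of-week  command
1 0 * * 0 /usr/local/scripts/bcp_out -s pubs2 > /tmp/bcp_out.log 2>&1
```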

Run a Test First!

Once you have edited the sample script to reflect your current environment,
we recommend that you test and fine-tune it before using it in a production
environment.

Example Scenario

For example, you could create a test server with two or three small
databases where each of the databases contains a few tables with minimal
data. Let us say that you have a 300 MB database called testdb where:

   * 250 MB of data space is reserved
   * 100 MB of data is actually used
   * Log uses 50 MB

In this example, you need disk space for other applications, so you have
decided to recreate testdb to reflect 250 MB (200 MB data/50 MB log).

The normal dump/load database process does not allow you to load a larger
database into a smaller one. Use your edited version of BCP_IN and BCP_OUT
to transfer the data as follows:

  1. Change the file permissions mode to executable:

     chmod +x bcp_in

     chmod +x bcp_out

  2. Bulk copy out the data using the single database switch:

     bcp_out -s testdb

     For details, see the companion TechNote, "Using the BCP_IN Sample
     Script to Load bcp Files."

  3. Invoke isql and perform the following tasks:

     >> isql -Usa -P

     1> use master
     2> go

     1> sp_dboption "testdb","select into","true"
     2> go

     1> use testdb
     2> go

     1> checkpoint
     2> go

     1> quit

     These tasks do the following:

   * Drop testdb and recreate it using the new sizes.
   * Recreate all database tables and accompanying schema.
   * Turn on the select into/bulkcopy option.

  4. Bulk copy in the data, using the -s switch:

     bcp_in -s testdb

  5. Dump the database. You can optionally turn off the select into/bulkcopy
     option.

After completing the steps in this example, you are ready to perform
subsequent backups/recovery.

BCP_OUT Sample Script

# -----------------------------------------------------
# ENVIRONMENTAL VARIABLE SECTION
# -------------------------------------------------

# Specify where to place the bcp files.

setenv BCPFILEDIR "/tmp"

# SYBASE and DSQUERY reflect the operating environment
# for your server.

setenv SYBASE "/sybase" setenv DSQUERY "my_servername"

# PASSWD most likely needs to reflect the sa.

setenv PASSWD "-Usa -P "

# Set BCP to the path for the bcp executable, which
# inherits the $SYBASE variable.

setenv BCP "$SYBASE/bin/bcp"

# ISQL reflects a valid path for the isql executable
# and valid command string.

setenv ISQL "$SYBASE/bin/isql $PASSWD -S$DSQUERY"

# If you want a prompt to confirm your database
# choices, set the CONF_SELECT flag to 1.

setenv CONF_SELECT "0"

# If you want a prompt when bcp outfiles already exist
# for the database/table qualifiers chosen, set the
# BCP_OVERWRITE flag to 1. If set to "0" (default),
# existing bcp files will be overwritten.

setenv BCP_OVERWRITE "0"

# Do not modify the umask setting.

umask 0

# -----------------------------------------------------
# SELECT DATABASE PROCESSING SECTION
# -------------------------------------------------

# Set SELECT_DB to "1" for automatic lookup of values
# in sel_dblist. Otherwise, use the default "0".

setenv SELECT_DB "0"

# Do not modify the counter variables NO_TABLES_FOUND
# and NOARGS.

set NO_TABLES_FOUND = 0
set NOARGS=1

# -----------------------------------------------------
# STARTING MENU
# -------------------------------------------------

echo ""
echo " ________________________________________"
echo "|          &emsp; &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;|"
echo "| >> SYBASE BCPFILE OUTPUT AUTOMATOR <<&emsp;|"
echo "|________________________________________|"
echo ""

if ($#argv == 0) then
        set SW = "0"
else
        set SW = "\Qecho $argv[1] | cut -d"-" -f2\Q"
endif

if ($SW != "a" && $SW != "s" && $SW != "d" &&
$SW != "t") then
        echo "Usage: "
        echo "------"
        echo "&emsp;&emsp;&emsp;bcp_out [-a]"
        echo "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;[-s]"
        echo "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;[-s] <dbname1> <dbnameX> ... "
        echo "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;[-t]"
        echo "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;[-d]"
        echo "where"
        echo ""
        echo "&ensp;[a] bcp all nonsystem required databases"
        echo "&ensp;[-s] bcp select databases from user input"
        echo "&ensp;[-s <db1> <dbX> ...] bcp listed databases"
        echo "&ensp;[-t] bcp out a single table"
        echo "&ensp;[-d] echos variables in use"
        echo ""
        exit

else if ($SW == "s" && "$#argv" >= "2") then
        set RCT = "1"
        while ($#argv > $RCT )
                set SELECT_DB = 1
                set VARCT=`expr $RCT + 1`
                set SEL_DBLIST=($SEL_DBLIST $argv[$VARCT])
                set RCT = `expr $RCT + 1`
        end

else if ($SW == "s" || $SW == "t") then
                set SELECT_DB = 1

else if ($SW == "d") then
                set YCK = 1

                if (! $?SYBASE) then
                        set SYBASE = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?DSQUERY) then
                        set DSQUERY = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?ISQL) then
                        set ISQL = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?PASSWD) then
                        set PASSWD = "<Value not set - edit file>"
                        set YCK = 0
                endif

                if (! $?BCPFILEDIR) then
                        set BCPFILEDIR = "<Value not set - edit file>"
                        set YCK = 0
                endif

                echo " Variable List"
                echo "------------------------------"
                echo ""
                echo "SYBASE        = $SYBASE"
                echo "DSQUERY       = $DSQUERY"
                echo "ISQL          = $ISQL"
                echo "PASSWORD      = $PASSWD"
                echo "BCP DIRECTORY = $BCPFILEDIR"
                echo ""
                exit
endif

# Confirm database choice.

if ($SELECT_DB == 1 && $SW != "t") then
        echo ""
        echo "Selective Database Option Enabled."
        echo ""

        if ($SW == "s" && $#argv < 2) then
                echo "Enter Database Names (one at a time)"
                echo "and terminate list with a <CR>."
                echo ""

                set ct=1
                set CHKIT=1
                set TWOSTRIKES = 1

                if (-e /tmp/qualify) rm /tmp/qualify
                while ($ct == 1)
                        echo -n "DBNAME >> "
                        set QUALIFY=($<)

                if ("$QUALIFY" == "" && $CHKIT == 1) then
                        echo ""
                        echo "*** Warning : please input a valid"
                        echo " Database name <or x to exit>."
                        echo ""
                        set CHKIT=0
                        set TWOSTRIKES = 0

                else if ("$QUALIFY" == "x") then
                        echo ""
                        echo "... Program exiting ..."
                        echo ""
                        exit

                else if ("$QUALIFY" == "") then
                        if ($CHKIT == 0 && $TWOSTRIKES == 0) then
                                echo ""
                                echo "... Program exiting ..."
                                echo ""
                                exit
                        endif
                        set ct=0
                else
                        echo $QUALIFY >> /tmp/qualify
                        set CHKIT=0
                endif
        end

                set SEL_DBLIST=`cat /tmp/qualify`
        endif

else if ($SW == "t") then
        echo ""
        echo "Single Table Option Enabled"
        echo ""
        set SELECT_DB=1
        echo -n "ENTER DBNAME >> "
        set SEL_DBLIST=($<)
        echo -n "ENTER TABLENAME >> "
        set QUALTBL=($<)
        echo ""

else if ($SELECT_DB != 1) then
        echo ""
        echo "All User Databases Option Enabled."
        echo ""
endif

if (-e /tmp/db_list) rm /tmp/db_list
if (-e /tmp/db_list1) rm /tmp/db_list1

if (-e /tmp/table_list) rm /tmp/table_list
if (-e /tmp/table_list1) rm /tmp/table_list1

# Obtain select database list, if enabled.

if ($SELECT_DB == "1") then
        foreach MANUAL_DB (`echo $SEL_DBLIST`)

$ISQL $PASSWD << ENDCMDS > /tmp/db_list

use master
go

select ""=name from sysdatabases where name =
"$MANUAL_DB"
go

ENDCMDS

        set CHK_OF_DBS = `tail -1 /tmp/db_list | cut -d" " -f1 | cut -d"(" -f2`

if ($CHK_OF_DBS == "0") then
        echo ""
        echo "*** Warning.. Incorrect Database Name. ***"
        echo ""
        echo "DATABASE : $MANUAL_DB"
        echo ""
        echo "Processing of BCP files terminated. Please"
        echo "confirm proper spelling and invoke script"
        echo "again. "
        echo ""
        echo "... Program Exiting ..."
        echo ""
        exit
endif

# Close out the database selection list verification.

end

# Restore the selective database list.

foreach SELDBNAME (`echo $SEL_DBLIST`)
        echo $SELDBNAME >> /tmp/db_list1
end

# Otherwise, obtain the entire database list.

else

$ISQL $PASSWD << ENDCMDS > /tmp/db_list

use master
go

select ""=name from sysdatabases where name !=
"master" and name != "model" and name != "tempdb" and
name != "sybsystemprocs"

go

ENDCMDS

# Strip out the header, leading spaces, and blank
# lines from the result set.

sed s/---//g /tmp/db_list | sed s/" "//g | grep -v "affected" | grep -v ^$ > /tmp/db_list1

# Complete the database selection processing.

endif

        echo "------------------------------------------"
        echo "The following Databases will be candidates"
        echo "for bcp operations:"
        echo "------------------------------------------"
        cat /tmp/db_list1
        echo ""

if ($CONF_SELECT == 1) then
        echo -n "*** Press "y" to continue >>> "
        set p_cont=($<)
        if ($p_cont != "y" && $p_cont != "Y") then
                echo ""
                echo "... Program exiting ..."
                echo ""
                exit
        endif
endif

foreach DBNAME (`cat /tmp/db_list1`)

        echo ""
        echo "================================="
        echo ">>>>> DATABASE: $DBNAME"
        echo "==============================="
        echo ""
        echo "... starting table collection for $DBNAME..."

if ($SW != "t") then

# Get the tables in each database.

set NO_TABLES_FOUND = 0

$ISQL $PASSWD << ENDCMDS > /tmp/table_list

use $DBNAME
go

set nocount on
go

select ""=name from sysobjects where type="U"

ENDCMDS

sed s/---//g /tmp/table_list | sed s/" "//g > /tmp/table_list1

else
        set NO_TABLES_FOUND = 0

$ISQL $PASSWD << ENDCMDS > /tmp/table_list

if not exists (select name from ${DBNAME}..sysobjects
where type="U" and name="${QUALTBL}") print
"NOTABLEFOUND"
go

ENDCMDS

        grep "NOTABLEFOUND" /tmp/table_list > /dev/null
        if ($status == 0) then
                echo ""
                echo "*** Warning : No Such Table Found. "
                echo " Please try again. "
                echo ""
                echo "... program exiting ..."
                exit
        endif

echo ${QUALTBL} > /tmp/table_list1

endif

echo "... table collection for $DBNAME completed..."
echo ""
echo "---------------------------------"
echo ""

grep "Msg" /tmp/table_list > /dev/null
if ($status == "0") then
        echo "*** Warning.. Error encountered ***"
        echo ""

        grep -n "Msg" /tmp/table_list > /tmp/table_err
        foreach ERRMSG (`cat /tmp/table_err | sed s/" "//g`)
                set ERRLN=`echo $ERRMSG | cut -d":" -f1`
                sed -n $ERRLN,`expr $ERRLN + 2`p /tmp/table_list
        end

        echo ""
        echo " Continuing to process on next database."
        echo ""

        set NO_TABLES_FOUND = 1
endif

if ($SW != "t") then
        set NUMBER_OF_TABLES = `tail -1 /tmp/table_list | cut -d" " -f1 | cut -d"(" -f2`

        if ($NUMBER_OF_TABLES == "0") then
        echo ""
        echo "*** Warning.. no qualifying tables found ***"
        echo " Continuing to process on next database."
        echo ""
        set NO_TABLES_FOUND = 1
endif

endif

if ($NO_TABLES_FOUND != 1) then
        foreach TABLEZ ("\Qcat /tmp/table_list1\Q")

        if ($TABLEZ == "") end
        if ($BCP_OVERWRITE == 1) then
        if (-e $BCPFILEDIR/$DBNAME.$TABLEZ.bcpfile) then
                echo ""
                echo "--------------"
                echo "*** WARNING... $BCPFILEDIR/$DBNAME.$TABLEZ.bcpfile EXISTS ***"
                echo "--------------"

                echo -n "Do you wish to overwrite file [y/n]? >> "
                set ans=($<)
                if ("$ans" != "y" && "$ans" != "Y") then
                        echo ""
                        echo "... Skipping $TABLEZ..."
                        echo ""
                        continue
                endif
        endif
        endif

# bcp command

        echo ""
        echo "----------------------------------"
        echo " * BCP file creation for : $TABLEZ"
        echo "-------------------------------"

$BCP $DBNAME..$TABLEZ out $BCPFILEDIR/$DBNAME.$TABLEZ.bcpfile $PASSWD -c

# Go to the next table.

        end

        echo ""
        echo "================================="
        echo ">>>>> BCP file creation for"
        echo ">>>>> $DBNAME completed. "
        echo "================================="
        echo ""

# Continue processing if no tables were found.

endif

# Go to the next database.

end

# Clean up.

#if (-e /tmp/db_list) rm /tmp/db_list
#if (-e /tmp/db_list1) rm /tmp/db_list1

#if (-e /tmp/table_list) rm /tmp/table_list
#if (-e /tmp/table_list1) rm /tmp/table_list1

echo ""
echo "... Program Exiting ..."
echo ""

----------------------------------------------------------------------------
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@pager.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]
