Subject: Sybase FAQ: 3/16 - section 2
Date: 1 Sep 1997 06:01:28 GMT
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly

Archive-name: databases/sybase-faq/part3
URL: http://reality.sgi.com/pablo/Sybase_FAQ

                   Q2.1: SHRINKING VARCHAR(M) TO VARCHAR(N)
                                       
   
     _________________________________________________________________
   
   Before you start:
   
     select max(datalength(column_name)) from _affected_table_
     
   In other words, _please_ be sure you're going into this with your head
   on straight.
   
  How To Change System Catalogs
  
   This information is _Critical To The Defense Of The Free World_, and
   you would be _Well Advised To Do It Exactly As Specified_:

use master
go
sp_configure "allow updates", 1
go
reconfigure with override /* System 10 and below */
go
use _victim_database_
go
select name, colid
from syscolumns
where id = object_id("_affected_table_")
go
begin tran
go
update syscolumns
set length = _new_value_
where id = object_id("_affected_table_")
  and colid = _value_from_above_
go
/* check results... cool?  Continue... else _rollback tran_ */
commit tran
go
use master
go
sp_configure "allow updates", 0
go
reconfigure /* System 10 and below */
go
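   After the commit, a quick sanity check is worthwhile (a sketch using
   the same placeholders as above):

```sql
/* confirm the catalog now shows the new length ... */
select name, length
from syscolumns
where id = object_id("_affected_table_")
go
/* ... and that no existing row exceeds it */
select max(datalength(column_name)) from _affected_table_
go
```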

  __________________________________________________________________________

                           Q2.2: FAQ ON PARTITIONING
                                       
   
     _________________________________________________________________
   
  Index of Sections
     * What Is Table Partitioning?
          + Page Contention for Inserts
          + I/O Contention
          + Caveats Regarding I/O Contention
     * Can I Partition Any Table?
          + How Do I Choose Which Tables To Partition?
     * Does Table Partitioning Require User-Defined Segments?
     * Can I Run Any Transact-SQL Command on a Partitioned Table?
     * How Does Partition Assignment Relate to Transactions?
     * Can Two Tasks Be Assigned to the Same Partition?
     * Must I Use Multiple Devices to Take Advantage of Partitions?
     * How Do I Create A Partitioned Table That Spans Multiple Devices?
     * How Do I Take Advantage of Table Partitioning with bcp in?
     * Getting More Information on Table Partitioning
       
   
   
    What Is Table Partitioning?
    
   Table partitioning is a procedure that creates multiple page chains
   for a single table.
   
   The primary purpose of table partitioning is to improve the
   performance of concurrent inserts to a table by reducing contention
   for the last page of a page chain.
   
   Partitioning can also potentially improve performance by making it
   possible to distribute a table's I/O over multiple database devices. 
   
      Page Contention for Inserts
      
   By default, SQL Server stores a table's data in one double-linked set
   of pages called a page chain. If the table does not have a clustered
   index, SQL Server makes all inserts to the table in the last page of
   the page chain.
   
   When a transaction inserts a row into a table, SQL Server holds an
   exclusive page lock on the last page while it inserts the row. If the
   current last page becomes full, SQL Server allocates and links a new
   last page.
   
   As multiple transactions attempt to insert data into the table at the
   same time, performance problems can occur. Only one transaction at a
   time can obtain an exclusive lock on the last page, so other
   concurrent insert transactions block each other.
   
   Partitioning a table creates multiple page chains (partitions) for the
   table and, therefore, multiple last pages for insert operations. A
   partitioned table has as many page chains and last pages as it has
   partitions. 
   
      I/O Contention
      
   Partitioning a table can reduce I/O contention when SQL Server writes
   pages from the cache to disk. If a table's segment spans several
   physical disks, SQL Server distributes the table's partitions across
   fragments on those disks when you create the partitions.
   
   A fragment is a piece of disk on which a particular database is
   assigned space. Multiple fragments can sit on one disk or be spread
   across multiple disks.
   
   When SQL Server flushes pages to disk and your fragments are spread
   across different disks, I/Os assigned to different physical disks can
   occur in parallel.
   
   To improve I/O performance for partitioned tables, you must ensure
   that the segment containing the partitioned table is composed of
   fragments spread across multiple physical devices. 
   
      Caveats Regarding I/O Contention
      
   Be aware that when you use partitioning to balance I/O you run the
   risk of disrupting load balancing even as you are trying to achieve
   it. The following scenarios can keep you from gaining the load
   balancing benefits you want:
     * You are partitioning an existing table. The existing data could be
       sitting on any fragment. Because partitions are randomly assigned,
       you run the risk of filling up a fragment. The partition will then
       steal space from other fragments, thereby disrupting load
       balancing.
     * Your fragments differ in size.
     * The segment maps are configured such that other objects are using
       the fragments to which the partitions are assigned.
     * A very large bcp job inserts many rows within a single
       transaction. Because a partition is assigned for the lifetime of a
       transaction, a huge amount of data could go to one particular
       partition, thus filling up the fragment to which that partition is
       assigned.
       
   
   
    Can I Partition Any Table?
    
   No. You cannot partition the following kinds of tables:
    1. Tables with clustered indexes
    2. SQL Server system tables
    3. Work tables
    4. Temporary tables
    5. Tables that are already partitioned. However, you can unpartition
       and then re-partition tables to change the number of partitions.
       
   
   
      How Do I Choose Which Tables To Partition?
      
   You should partition heap tables that have large amounts of concurrent
   insert activity. (A heap table is a table with no clustered index.)
   Here are some examples:
    1. An "append-only" table to which every transaction must write
    2. Tables that provide a history or audit list of activities
     3. A new table into which you load data with bcp in. Once the data is
        loaded, you can unpartition the table. This enables you to
        create a clustered index on the table, or issue other commands not
        permitted on a partitioned table.
       
   
   
    Does Table Partitioning Require User-Defined Segments?
    
    No. By design, each table is intrinsically assigned to one segment,
   called the default segment. When a table is partitioned, any
   partitions on that table are distributed among the devices assigned to
   the default segment.
   
   In the example under "How Do I Create A Partitioned Table That Spans
   Multiple Devices?", the table sits on a user-defined segment that
   spans three devices. 
   
    Can I Run Any Transact-SQL Command on a Partitioned Table?
    
    No. Once you have partitioned a table, you cannot use any of the
   following Transact-SQL commands on the table until you unpartition it:
    1. create clustered index
    2. drop table
    3. sp_placeobject
    4. truncate table
    5. alter table table_name partition n
       
   
   
    How Does Partition Assignment Relate to Transactions?
    
     A user is assigned to a partition for the duration of a transaction.
    Assignment of partitions resumes with the first insert in a new
    transaction. The user holds the lock, and therefore the partition,
    until the transaction ends.
   
   For this reason, if you are inserting a great deal of data, you should
   batch it into separate jobs, each within its own transaction. See "How
   Do I Take Advantage of Table Partitioning with bcp in?", for details. 
   
    Can Two Tasks Be Assigned to the Same Partition?
    
    Yes. SQL Server assigns partitions at random. This means there is
    always a chance that two users will vie for the same partition when
    attempting to insert, and one will block the other.
   
   The more partitions a table has, the lower the probability of users
   trying to write to the same partition at the same time. 
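   The claim in the last sentence is just the birthday problem; here is a
   small illustrative model, assuming uniform, independent random
   assignment as described above:

```python
def collision_probability(partitions: int, writers: int) -> float:
    """Probability that at least two concurrent writers are randomly
    assigned the same partition (uniform, independent assignment)."""
    if writers > partitions:
        return 1.0
    p_all_distinct = 1.0
    for i in range(writers):
        p_all_distinct *= (partitions - i) / partitions
    return 1.0 - p_all_distinct

# More partitions => lower chance of two writers colliding.
for n in (5, 10, 30):
    print(n, round(collision_probability(n, 3), 3))
```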
   
    Must I Use Multiple Devices to Take Advantage of Partitions?
    
    It depends on which type of performance improvement you want.
   
    Table partitioning improves performance in two ways: primarily, by
    decreasing page contention for inserts and, secondarily, by decreasing
    I/O contention. "What Is Table Partitioning?" explains each in detail.
    
    If you want to decrease page contention you do not need multiple
    devices. If you want to decrease I/O contention, you must use multiple
    devices. 
   
    How Do I Create A Partitioned Table That Spans Multiple Devices?
    
    Creating a partitioned table that spans multiple devices is a
   multi-step procedure. In this example, we assume the following:
     * We want to create a new segment rather than using the default
       segment.
     * We want to spread the partitioned table across three devices,
       data_dev1, data_dev2, and data_dev3.
       
   Here are the steps:
    1. Define a segment:
       
     sp_addsegment newsegment, my_database,data_dev1
    2. Extend the segment across all three devices:
       
     sp_extendsegment newsegment, my_database, data_dev2
     sp_extendsegment newsegment, my_database, data_dev3
    3. Create the table on the segment:
       
      create table my_table
      (names varchar(80) not null)
     on newsegment
    4. Partition the table:
       
     alter table my_table partition 30
     
   
   
    How Do I Take Advantage of Table Partitioning with bcp in?
    
    You can take advantage of table partitioning with bcp in by following
   these guidelines:
    1. Break up the data file into multiple files and simultaneously run
       each of these files as a separate bcp job against one table.
       
       Running simultaneous jobs increases throughput.
    2. Choose a number of partitions greater than the number of bcp jobs.
       
       
       Having more partitions than processes (jobs) decreases the
       probability of page lock contention.
    3. Use the batch option of bcp in. For example, after every 100 rows,
       force a commit. Here is the syntax of this command:
       
     bcp table_name in filename -b100
   Each time a transaction commits, SQL Server randomly assigns a new
       partition for the next insert. This, in turn, reduces the
       probability of page lock contention.
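    The three guidelines can be combined into a sketch like the
    following. File names, login, password and server name are
    placeholders; -c (character-mode data) is assumed, and -b is the
    batch size from step 3:

```sh
# split the load file into roughly equal pieces (hypothetical names)
split -l 250000 big_load_file part.

# run one bcp job per piece, in parallel, committing every 100 rows
for f in part.*
do
    bcp my_database..my_table in $f -c -b100 -Umy_login -Pmy_password -SMY_SERVER &
done
wait
```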
       
   
   
    Getting More Information on Table Partitioning
    
   For more information on table partitioning, see the chapter on
   controlling physical data placement in the SQL Server Performance and
   Tuning Guide.
     _________________________________________________________________

           Q2.3: HOW DO I TURN OFF _MARKED SUSPECT_ ON MY DATABASE?
                                       
   
     _________________________________________________________________
   
    Say one of your databases is marked suspect as the SQL Server is
    coming up. Here are the steps to take to unset the flag.
    
      _After switching the flag, remember to fix the problem that caused
      the database to be marked suspect in the first place. _
     
Pre System 10

    1. sp_configure "allow", 1
    2. reconfigure with override
    3. select status - 320 from sysdatabases where dbid =
       db_id("my_hosed_db") - save this value.
    4. begin transaction
    5. update sysdatabases set status = _-32767_ where dbid =
       db_id("my_hosed_db")
    6. commit transaction
     7. you should now be able to access the database. If not:
         1. shutdown
         2. startserver -f RUN_*
    8. _fix the problem that caused the database to be marked suspect_
    9. begin transaction
   10. update sysdatabases set status = _saved_value_ where dbid =
       db_id("my_hosed_db")
   11. commit transaction
   12. sp_configure "allow", 0
   13. reconfigure
       
System 10

    1. sp_configure "allow", 1
    2. reconfigure with override
    3. select status - 320 from sysdatabases where dbid =
       db_id("my_hosed_db") - save this value.
    4. begin transaction
    5. update sysdatabases set status = _-32768_ where dbid =
       db_id("my_hosed_db")
    6. commit transaction
    7. shutdown
    8. startserver -f RUN_*
    9. _fix the problem that caused the database to be marked suspect_
   10. begin transaction
   11. update sysdatabases set status = _saved_value_ where dbid =
       db_id("my_hosed_db")
   12. commit transaction
   13. sp_configure "allow", 0
   14. reconfigure
   15. shutdown
   16. startserver -f RUN_*
       
   
     _________________________________________________________________

                      Q2.4: HOW TO MANUALLY DROP A TABLE
                                       
   
     _________________________________________________________________
   
    Occasionally you may find that after you issue a _drop table_ command
    the SQL Server crashes and, consequently, the table isn't dropped
    entirely. Sure, you can't see it, but that sucker is still floating
    around somewhere.
   
   Here's a list of instructions to follow when trying to drop a corrupt
   table:
    1.

    sp_configure allow, 1
    go
    reconfigure with override
    go

    2. Write _db_id_ down.

    use _db_name_
    go
    select db_id()
    go

    3. Write down the _id_ of the _bad_table_:

    use master
    go
    select id from sysobjects where name = _bad_table_name_
    go

    4. You will need these index IDs to run _dbcc extentzap_. Also,
       remember that if the table has a clustered index you will need to
       run _extentzap_ on index "0", even though there is no sysindexes
       entry for that indid.

    select indid from sysindexes where id = _table_id_
    go

    5. This is not required but a good idea:

    begin transaction
    go

    6. Type in this short script, this gets rid of all system catalog
       information for the object, including any object and procedure
       dependencies that may be present.
       
       Some of the entries are unnecessary but better safe than sorry.

     declare @obj int
      select @obj = id from sysobjects where name = "_bad_table_name_"
     delete syscolumns where id = @obj
     delete sysindexes where id = @obj
     delete sysobjects where id = @obj
     delete sysprocedures where id in
            (select id from sysdepends where depid = @obj)
     delete sysdepends where depid = @obj
     delete syskeys where id = @obj
     delete syskeys where depid = @obj
     delete sysprotects where id = @obj
     delete sysconstraints where tableid = @obj
     delete sysreferences where tableid = @obj
     go

    7. Just do it!

    commit transaction
    go

     8. Prepare the database to run _dbcc extentzap_:

    sp_dboption _db_name_, read, true
    go
    use _db_name_
    go
    checkpoint
    go

    9. Run _dbcc extentzap_ once for _each_ index (including index 0, the
       data level) that you got from above:

    use master
    go
    dbcc traceon (3604)
    go
    dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 0)
    go
    dbcc extentzap (_db_id_, _obj_id_, _indx_id_, 1)
    go


     Notice that extentzap runs _twice_ for each index. This is because
     the last parameter (the _sort_ bit) might be 0 or 1 for each index,
     and you want to be absolutely sure you clean them all out.
   10. Clean up after yourself.

    sp_dboption _db_name_, read, false
    go
    use _db_name_
    go
    checkpoint
    go
    sp_configure allow, 0
    go
    reconfigure with override
    go


   
     _________________________________________________________________

                     Q2.5: WHY NOT MAX OUT ALL MY COLUMNS?
                                       
   
     _________________________________________________________________
   
   People occasionally ask the following valid question:
   
     Suppose I have varying lengths of character strings none of which
     should exceed 50 characters.
     
     _Is there any advantage of last_name varchar(50) over this last_name
     varchar(255)?_
     
      That is, for simplicity, can I just define all my varying strings to
      be varchar(255) without even thinking about how long they may
      actually be? Is there any storage or performance penalty for this?
      
    There is no performance penalty in doing this, but as another netter
    pointed out:
   
     If you want to define indexes on these fields, then you should
     specify the smallest size because the sum of the maximal lengths of
     the fields in the index can't be greater than 256 bytes.
     
   and someone else wrote in saying:
   
     Your data structures should match the business requirements. This
     way the data structure themselves becomes a data dictionary for
     others to model their applications (report generation and the like).
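   A quick way to sanity-check a candidate index against that 256-byte
   rule (the limit value is taken from the quote above; column sizes are
   whatever you declared):

```python
def index_fits(column_max_lengths, limit=256):
    """Sum of the maximum lengths of the indexed columns must not
    exceed the server's index-key limit (256 bytes here)."""
    return sum(column_max_lengths) <= limit

print(index_fits([50, 50]))    # two varchar(50) columns: fits
print(index_fits([255, 255]))  # two varchar(255) columns: does not
```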
     
   
     _________________________________________________________________

                 Q2.6: WHAT'S A GOOD EXAMPLE OF A TRANSACTION?
                                       
   
     _________________________________________________________________
   
      This answer is geared for Online Transaction Processing (OLTP)
      applications.
     
   To gain maximum throughput all your transactions should be in stored
   procedures - see Q8.8. The transactions within each stored procedure
   should be short and simple. All validation should be done outside of
   the transaction and only the modification to the database should be
   done within the transaction. Also, don't forget to name the
   transaction for _sp_whodo_ - see Q9.2.
   
   The following is an example of a _good_ transaction:
   

/* perform validation */
select ...
if ... /* error */
   /* give error message */
else   /* proceed */
   begin
      begin transaction acct_addition
      update ...
      insert ...
      commit transaction acct_addition
   end

   
   
   The following is an example of a _bad_ transaction:
   

begin transaction poor_us
update X ....

select ...
if ... /* error */
   /* give error message */
else   /* proceed */
   begin
      update ...
      insert ...
   end
commit transaction poor_us

   This is bad because:
     * the first update on table X is held throughout the transaction.
       The idea with OLTP is to get in and out _fast_.
     * If an error message is presented to the end user and we await
       their response, we'll maintain the lock on table X until the user
       presses return. If the user is out in the can we can wait for
       hours.
       
   
     _________________________________________________________________

                          Q2.7: WHAT'S A NATURAL KEY?
                                       
   
     _________________________________________________________________
   
   Let me think back to my database class... okay, I can't think that far
   so I'll paraphrase... essentially, a _natural key_ is a key for a
   given table that uniquely identifies the row. It's natural in the
   sense that it follows the business or real world need.
   
    For example, assume that social security numbers are unique (they are
    supposed to be unique, but in practice that's not always the case);
    then if you had the following employee table:
   

employee:

        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

    Then a natural key would be _ssn_. If the combination of _f_name_ and
    _l_name_ were unique at this company, then another _natural key_ would
    be _f_name, l_name_. As a matter of fact, you can have many _natural
    keys_ in a given table, but in practice what one does is build a
    surrogate (or artificial) key.
   
    The surrogate key is guaranteed to be unique because (wait, get back,
    here it goes again) it's typically a monotonically increasing value.
    Okay, my mathematician wife would be proud of me... really all it
    means is that each new key is greater than the last, e.g. i+1
   
    The reason one uses a surrogate key is that joins on it are faster.
   
   If we extended our employee table to have a surrogate key:
   

employee:

        id      identity
        ssn     char(09)
        f_name  char(20)
        l_name  char(20)
        title   char(03)

   Then instead of doing the following:
   

   where a.f_name = b.f_name
     and a.l_name = b.l_name

   we'd do this:
   

   where a.id = b.id

    We can build indexes on these keys, and since Sybase's atomic storage
    unit is a 2K page, smaller keys mean more values per 2K page and
    thus better performance (imagine the key being 40 bytes versus being,
    say, 4 bytes... how many 40-byte values can you stash in a 2K page
    versus 4-byte values? -- and how much wood could a woodchuck chuck,
    if a woodchuck could chuck wood?)
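   The back-of-the-envelope arithmetic, ignoring page and row overhead
   (real pages hold somewhat fewer entries):

```python
PAGE_SIZE = 2048  # Sybase's 2K page

def keys_per_page(key_bytes: int, page_size: int = PAGE_SIZE) -> int:
    """Rough count of index key values per page, ignoring page and
    row overhead."""
    return page_size // key_bytes

print(keys_per_page(4))   # small surrogate key
print(keys_per_page(40))  # wide natural key
```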
   
  Does it have anything to do with natural joins?
  
   
   
    Um, not really... from "A Guide to Sybase and SQL Server", McGoveran
    and Date, p. 112:
   
     The equi-join by definition must produce a result containing two
     identical columns. If one of those two columns is eliminated, what
     is left is called the natural join.
     
   
     _________________________________________________________________

                   Q2.8: MAKING A STORED PROCEDURE INVISIBLE
                                       
   
     _________________________________________________________________
   
    Perhaps you want to prevent the buyer of your software from using
    _defncopy_ to pull out the text of all your stored procedures. It is
    perfectly safe to delete the syscomments entries of any stored
    procedures you'd like to protect:

sp_configure "allow updates", 1
go
reconfigure with override /* System 10 and below */
go
use _affected_database_
go
delete syscomments where id = object_id("_procedure_name_")
go
use master
go
sp_configure "allow updates", 0
go
reconfigure /* System 10 and below */
go

   I believe in future releases of Sybase we'll be able to _see_ the SQL
   that is being executed. I don't know if that would be simply the
   stored procedure name or the SQL itself.
     _________________________________________________________________

             Q2.9: SAVING SPACE WHEN INSERTING ROWS MONOTONICALLY
                                       
   
     _________________________________________________________________
   
    If the columns that comprise the clustered index are monotonically
    increasing (that is, new row key values are greater than those
    previously inserted), the following System 11 dbcc tune stops SQL
    Server from splitting a page when it is half full. Rather, it lets
    the page fill and then allocates another page:

dbcc tune(ascinserts, 1, "_my_table_")

   By the way, SyBooks is wrong when it states that the above needs to be
   reset when the SQL Server is rebooted. This is a permanent setting.
   
   To undo it:

dbcc tune(ascinserts, 0, "_my_table_")

   
     _________________________________________________________________

                 Q2.10: HOW TO COMPUTE DATABASE FRAGMENTATION
                                       
   
     _________________________________________________________________
   
Command


dbcc traceon(3604)
go
dbcc tab(production, _my_table_, 0)
go

Interpretation

   A delta of one means the next page is on the same track, two is a
   short seek, three is a long seek. You can play with these constants
   but they aren't that important.
   
   A table I thought was unfragmented had L1 = 1.2 L2 = 1.8
   
   A table I thought was fragmented had L1 = 2.4 L2 = 6.6
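   The FAQ doesn't show the raw dbcc tab output, but the flavor of the
   measurement can be modelled as an average per-hop seek cost over the
   page chain. The scoring and the short-seek threshold below are
   invented purely for illustration:

```python
def chain_seek_score(page_ids, short_seek=8):
    """Illustrative fragmentation score: average 'seek cost' per hop
    along a page chain.  A delta of one page costs 1 (next page on the
    same track), a small delta costs 2 (short seek), anything bigger
    costs 3 (long seek).  The short_seek threshold is hypothetical."""
    costs = []
    for prev, nxt in zip(page_ids, page_ids[1:]):
        delta = abs(nxt - prev)
        costs.append(1 if delta == 1 else 2 if delta <= short_seek else 3)
    return sum(costs) / len(costs)

print(chain_seek_score([100, 101, 102, 103]))  # contiguous chain
print(chain_seek_score([100, 350, 101, 900]))  # scattered chain
```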
   
How to Fix

   You fix a fragmented table with clustered index by dropping and
   creating the index. This measurement isn't the correct one for tables
   without clustered indexes. If your table doesn't have a clustered
   index, create a dummy one and drop it.
     _________________________________________________________________
                        Q2.11: TASKS A DBA SHOULD DO...
                                        
   
     _________________________________________________________________
   
   I was asked by a poster to list what a DBA's tasks ought to be. Here's
   what I believe (this will evolve as time progresses):
                                 DBA Tasks

           Task              Reason                   Period

                                         If your SQL Server permits,
                        I consider       daily before your database
   dbcc checkdb,        these the        dumps. If this is not possible
   checkcatalog,        minimal dbcc's   due to the size of your
   checkalloc           to ensure the    databases, then try the
                        integrity of     different options so that by
                        your database    the end of, say, a week, you've
                                         run them all.

   Disaster recovery    Always be
   scripts - scripts    prepared for
   to rebuild your SQL  the worst. Make
   Server in case of    sure to test
   hardware failure     them.

   scripts to
   logically dump your
   master database,
   that is bcp the      You can
   critical system      selectively
   tables:              rebuild your
   sysdatabases,        database in      Daily
   sysdevices,          case of
   syslogins,           hardware
   sysservers,          failure
   sysusers,
   syssegments,
   sysremotelogins

   dump the user
   databases            CYA              Daily

   dump the
   transaction logs     CYA              Daily

   dump the master                       After any change as well as
   database             CYA              daily

   System 11 and        This is the
   beyond - save the    configuration    After any change as well as
   $DSQUERY.cfg to      that you've      daily
   tape                 dialed in, why
                        redo the work?

                                         Depending on how often your
                                         major tables change. Some tables
                                         are pretty much static (e.g.
                                         lookup tables) so they don't
   update statistics                     need an update statistics, other
   on frequently        To ensure the    tables suffer severe trauma
   changed tables and   performance of   (e.g. massive
   sp_recompile         your SQL Server  updates/deletes/inserts) so an
                                         update stats needs to be run
                                         either nightly/weekly/monthly.
                                         This should be done using
                                         cronjobs.

   create a dummy SQL
   Server and do bad
   things to it:        See disaster
   delete devices,      recovery!        When time permits
   destroy
   permissions...

   Talk to the          It's better to
   application          work with them   As time permits.
   developers.          than against
                        them.

   Learn new tools      So you can       As time permits.
                        sleep!

   Read c.d.s           Passes the       Priority One!
                        time.
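   The "logical dump of master" task above can be sketched as a small
   script; login, password and server name are placeholders for your
   site's values:

```sh
#!/bin/sh
# bcp out the critical system tables listed above (character mode)
for t in sysdatabases sysdevices syslogins sysservers \
         sysusers syssegments sysremotelogins
do
    bcp master..$t out $t.bcp -c -Usa -Pmy_password -SMY_SERVER
done
```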

     _________________________________________________________________

                   Q2.12: HOW TO IMPLEMENT DATABASE SECURITY
                                       
   
     _________________________________________________________________
   
   This is a brief run-down of the features and ideas you can use to
   implement database security:
   
Logins, Roles, Users, Aliases and Groups

     * sp_addlogin - Creating a login adds a basic authorisation for an
       account - a username and password - to connect to the server. By
       default, no access is granted to any individual databases.
     * sp_adduser - A user is the addition of an account to a specific
       database.
     * sp_addalias - An alias is a method of allowing an account to use a
       specific database by impersonating an existing database user or
       owner.
     * sp_addgroup - Groups are collections of users at the database
       level. Users can be added to groups via the sp_adduser command.
       
       A user can belong to only one group - a serious limitation that
       Sybase might be addressing soon according to the ISUG enhancements
       requests. Permissions on objects can be granted or revoked to or
       from users or groups.
     * sp_role - A role is a high-level Sybase authorisation to act in a
       specific capacity for administration purposes. Refer to the Sybase
       documentation for details.
       
  Recommendations
  
   Make sure there is a unique login account for each physical person
   and/or process that uses the server. Creating generic logins used by
   many people or processes is a _bad idea_ - there is a loss of
   accountability and it makes it difficult to track which particular
   person is causing server problems when looking at the output of
    sp_who. Note that the output of sp_who gives a hostname - properly
    coded applications will set this value to something meaningful (i.e.
    the machine name the client application is running from) so you can
    see where users are running their programs. Note also that if you look
    at master..sysprocesses rather than just sp_who, there is also a
    program_name. Again, properly coded applications will set this (e.g.
    to 'isql') so you can see which application is running.
    
    If you're coding your own client applications, make sure you set
    hostname and program_name via the appropriate Open Client calls. One
    imaginative use I've seen of the program_name setting is to
    incorporate the connection time into the name, e.g. APPNAME-DDHHMM
    (you have 16 characters to play with), as there's no method of
    determining this otherwise.
   
   Set up groups, and add your users to them. It is much easier to manage
   an object permissions system in this way. If all your permissions are
   set to groups, then adding a user to the group ensures that users
   automatically inherit the correct permissions - administration is
   *much* simpler.
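    A minimal sketch of that group-based setup (all names here are
    invented for illustration):

```sql
use master
go
sp_addlogin jsmith, initial_password   /* one login per person */
go
use my_database
go
sp_addgroup clerks                     /* a group per job function */
go
sp_adduser jsmith, jsmith, clerks      /* add the user straight into it */
go
grant execute on proc_add_order to clerks  /* permissions go to the group */
go
```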
   
Objects and Permissions

   Access to database objects is defined by granting and/or revoking
   various access rights to and from users or groups. Refer to the Sybase
   documentation for details.
   
  Recommendations
  
   The ideal setup has all database objects being owned by the dbo,
   meaning no ordinary users have any default access at all. Specific
   permissions users require to access the database are granted
   explicitly. As mentioned above - set permissions for objects to a
   group and add users to that group. Any new user added to the database
   via the group then automatically obtains the correct set of
   permissions.
   
   Preferably, no access is granted at all to data tables, and all read
   and write activity is accomplished through stored procedures that
   users have execute permission on. The benefit of this from a security
   point of view is that access can be rigidly controlled with reference
   to the data being manipulated, user clearance levels, time of day, and
   anything else that can be programmed via T-SQL. The other benefits of
   using stored procedures are well known (see Q8.8). Obviously whether
   you can implement this depends on the nature of your application, but
   the vast majority of in-house-developed applications can rely solely
   on stored procedures to carry out all the work necessary. The only
   server-side restriction on this method is the current inability of
   stored procedures to adequately handle text and image datatypes (see
   Q8.12). To get around this views can be created that expose only the
   necessary columns to direct read or write access.
   
Views

   Views can be a useful general security feature. Where stored
   procedures are inappropriate views can be used to control access to
   tables to a lesser extent. They also have a role in defining row-level
   security - eg. the underlying table can have a security status column
   joined to a user authorisation level table in the view so that users
   can only see data they are cleared for. Obviously they can also be
   used to implement column-level security by screening out sensitive
   columns from a table.
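    As a hedged sketch of both ideas (the tables, columns and clearance
    scheme are all hypothetical):

```sql
/* Users see only rows at or below their clearance level, and the
   sensitive columns are screened out entirely. */
create view emp_public
as
select e.f_name, e.l_name, e.title
from   employee e, clearance c
where  c.db_user = user_name()      /* current database user */
  and  e.sec_level <= c.level
go
grant select on emp_public to clerks
go
revoke all on employee from clerks  /* no direct table access */
go
```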
   
Triggers

   Triggers can be used to implement further levels of security - they
   could be viewed as a last line of defence in being able to rollback
   unauthorised write activity (they cannot be used to implement any read
   security). However, there is a strong argument that triggers should be
   restricted to doing what they were designed for - implementing
   referential integrity - rather being loaded up with application logic.
   
Administrative Roles

   With Sybase version 10 came the ability to grant certain
   administrative roles to user accounts. Accounts can have sa-level
   privilege, or be restricted to security or operator roles - see
   sp_role.
   
  Recommendations
  
   The use of any generic account is not a good idea. If more than one
   person requires access as sa to a server, then it is more accountable
   and traceable if they each have an individual account with sa_role
   granted.
     _________________________________________________________________
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@pager.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]
