We need to set up a database so we can connect to it and debug a problem. I did a fresh install of Oracle 9 (the version the client is running) along with the management tools, but nothing shows up in the manager as far as tables in any schema, and I'm at my wits' end.

Comment from namezero: exact problem here. We got an Oracle dump and have to import it. Extremely counterintuitive compared to, well, virtually any other database system.
This provides an easy way to remap multiple data files in a directory when you only want to change the directory file specification while preserving the original data file names.
In addition, Oracle recommends that the directory be properly terminated with the directory file terminator for the respective source and target platform. Oracle recommends that you enclose the directory names in quotation marks to eliminate ambiguity on platforms for which a colon is a valid directory file specification character. In addition, you have a parameter file, payroll.
However, different source schemas can map to the same target schema. The mapping can be incomplete; see the Restrictions section in this topic. For example, the following Export commands create dump file sets with the necessary metadata to create a schema, because the user SYSTEM has the necessary privileges:.
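The export commands referred to above are not reproduced in this copy. A sketch of what they might look like (the directory object and dump file names are assumptions):

```shell
# Full-mode export run by the privileged user SYSTEM; the dump file set
# includes the metadata needed to re-create schemas on import
expdp system DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp FULL=YES

# Schema-mode export of hr, again run as SYSTEM
expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr
```

Importing either dump file set with REMAP_SCHEMA=hr:scott can then create the target schema automatically, because the metadata was exported with sufficient privileges.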
If your dump file set does not contain the metadata necessary to create a schema, or if you do not have privileges, then the target schema must be created before the import operation is performed.
You must have the target schema created before the import, because the unprivileged dump files do not contain the necessary information for the import to create the schema automatically. For Oracle Database releases earlier than Oracle Database 11g, if the import operation does create the schema, then after the import is complete, you must assign it a valid password to connect to it.
You can then use the following SQL statement to assign the password; note that you require privileges:. Unprivileged users can perform schema remaps only if their schema is the target schema of the remap.
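The SQL statement referred to above is the standard ALTER USER command, run by a user with the ALTER USER privilege (the schema name scott is carried over from the surrounding example):

```sql
-- Assign a valid password so you can connect to the newly created schema
ALTER USER scott IDENTIFIED BY new_password;
```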
Privileged users can perform unrestricted schema remaps. The mapping can be incomplete, because there are certain schema references that Import is not capable of finding. For example, Import does not find schema references embedded within the body of definitions of types, views, procedures, and packages.
If any table in the schema being remapped contains user-defined object types, and that table changes between the time it is exported and the time you attempt to import it, then the import of that table fails.
However, the import operation itself continues. By default, if schema objects on the source database have object identifiers (OIDs), then they are imported to the target database with those same OIDs. If an object is imported back into the same database from which it was exported, but into a different schema, then the OID of the new imported object is the same as that of the existing object, and the import fails.
You can connect to the scott schema after the import by using the existing password without resetting it. If user scott does not exist before you execute the import operation, then Import automatically creates it with an unusable password. This action is possible because the dump file was created by a user with the necessary privileges. However, you cannot connect to scott on completion of the import unless you reset the password for scott on the target database after the import completes. Usage Notes. If you specify a value of the form A.B:C, then Import assumes that A is a schema name, B is the old table name, and C is the new table name.
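An illustrative REMAP_TABLE invocation using this form (the directory object and dump file names are assumptions):

```shell
# Renames the imported copy of hr.employees to emps in the target database
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp \
    TABLES=hr.employees REMAP_TABLE=hr.employees:emps
```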
To use the first syntax to rename a partition that is being promoted to a nonpartitioned table, you must specify a schema name. To use the second syntax to rename a partition being promoted to a nonpartitioned table, you qualify it with the old table name.
No schema name is required. Data Pump does not have enough information for any dependent tables created internally. Only objects created by the Import are remapped. In particular, pre-existing tables are not remapped. Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
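A typical REMAP_TABLESPACE import might look like this (the tablespace, directory, and dump file names are assumptions):

```shell
# Objects that were stored in tablespace tbs_1 at export time
# are created in tablespace tbs_6 on import
impdp hr REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp
```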
The target schema must have sufficient quota in the target tablespace. That method was subject to many restrictions (including the number of tablespace subclauses), which sometimes resulted in the failure of some DDL commands. Data Pump Import can only remap tablespaces for transportable imports in databases where the compatibility level is set to 10.1 or later.

First, the user definitions are imported (if they do not already exist), including system and role grants, password history, and so on.
Then all objects contained within the schemas are imported. Unprivileged users can specify only their own schemas, or schemas remapped to their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter. The hr schema is imported from the expdat.dmp file, and the import is logged to the file schemas.log. The service name is only used to determine the resource group and instances defined for that resource group.
The instance where the job is started is always used, regardless of whether it is part of the resource group. In such a scenario, the following would be true:. This example starts a schema-mode network import of the hr schema. Note that there is no dump file generated with a network import.
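Such a network import might be started as follows (the database link name source_database_link is an assumption; it must point at the source database):

```shell
# Schema-mode network import of hr; the data is pulled directly
# over the database link, so no dump file is involved
impdp hr SCHEMAS=hr NETWORK_LINK=source_database_link
```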
Specifies whether Import skips loading tables that have indexes that were set to the Index Unusable state by either the system or the user. Other tables, with indexes not previously set Unusable, continue to be updated as rows are inserted. The default value for that parameter is y. If indexes used to enforce constraints are marked unusable, then the data is not imported into that table.
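A sketch of an import that sets this parameter explicitly (directory, dump file, and log file names are assumptions):

```shell
# Tables whose indexes were already marked Index Unusable are skipped
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp \
    LOGFILE=skip.log SKIP_UNUSABLE_INDEXES=y
```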
It has no practical effect when a table is created as part of an import. In that case, the table and indexes are newly created, and are not marked unusable. Oracle Data Pump selects all inherited objects that have not changed, and all actual objects that have changed. If this parameter is not specified, then the default edition is used. If the specified edition does not exist or is not usable, then an error message is returned.
Because no import mode is specified, the default, which is schema mode, is used. No dump file is generated, because this is a network import. See Editions in Oracle Database Development Guide for more information about the editions feature, including inherited and actual objects.
The SQL is not actually run, and the target system remains unchanged. Any existing file that has a name matching the one specified with this parameter is overwritten. Note that passwords are not included in the SQL file. In the following example, the dashes -- indicate that a comment follows. The hr schema name is shown, but not the password. Therefore, before you can run the SQL file, you must edit it by removing the dashes indicating a comment, and adding the password for the hr schema.
A SQL file named expfull.sql is written. If you supply a value for integer, then it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered, or if the default value of 0 is used, then no additional information is displayed beyond information about the completion of each object type, table, or partition. This status information is written only to your standard output device, not to the log file (if one is in effect).
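The SQLFILE example being described might be invoked like this (the directory object names are assumptions):

```shell
# Writes the DDL that a full import would execute into expfull.sql;
# nothing is actually run against the target database
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
```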
Specifies whether to import any GoldenGate Replication metadata that can be present in the export dump file. SKIP leaves the table as is, and moves on to the next object. If the existing table has active constraints and triggers, then it is loaded using the external tables access method. If any row violates an active constraint, then the load fails and no data is loaded.
If you have data that must be loaded, but that can cause constraint violations, then consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints. For this reason, you may want to compress your data after the load. When Oracle Data Pump detects that the source table and target table do not match (the two tables do not have the same number of columns, or the target table has a column name that is not present in the source table), it then compares column names between the two tables.
If the tables have at least one column in common, then the data for the common columns is imported into the table (assuming the data types are compatible). The following restrictions apply. If this parameter is specified as y, then the existing data files are reinitialized.
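The data-file reinitialization described above can be requested like this (a sketch; the directory and dump file names are assumptions):

```shell
# Reinitializes existing data files rather than failing when they already exist
impdp hr FULL=YES DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp REUSE_DATAFILES=YES
```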
In a table-mode import, you can filter the data that is imported from the source by specifying a comma-delimited list of tables and partitions or subpartitions. By default, table names in a database are stored as uppercase characters. If you have a table name in mixed-case or lowercase characters, and you want to preserve case sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database. Some operating systems require that quotation marks on the command line be preceded by an escape character.
The following are examples of how case-sensitivity can be preserved in the different Import modes. Table names specified on the command line cannot include a pound sign (#), unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound sign (#), then unless the table name is enclosed in quotation marks, the Import utility interprets the rest of the line as a comment.
For example, if the parameter file contains the following line, then Import interprets everything on the line after emp as a comment, and does not import the tables dept and mydata:. However, if the parameter file contains the following line, then the Import utility imports all three tables because emp is enclosed in quotation marks:. Some operating systems require single quotation marks rather than double quotation marks, or the reverse; see your operating system documentation.
Different operating systems also have other restrictions on table naming. You must use escape characters so that the operating system shell ignores these special characters and they can be passed through to Import; otherwise an error is returned. In such cases, the limit is 4 KB. The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file.
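The example referred to above is presumably of this form (the directory and dump file names follow the document's earlier examples):

```shell
# Imports only the employees and jobs tables from the dump file set
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
```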
During the following import situations, Data Pump automatically creates the tablespaces into which the data will be imported:. In all other cases, the tablespaces for the selected objects must already exist on the import database. It assumes that the tablespaces already exist. Objects that are not editionable are created in all editions. For example, tables are not editionable, so if there is a table in the dump file, then the table is created, and all editions see it.
Objects in the dump file that are editionable, such as procedures, are created only in the specified target edition. If this parameter is not specified, then Import uses the default edition on the target database, even if an edition was specified in the export job. If the specified edition does not exist, or is not usable, then an error message is returned. This parameter is only useful if there are two or more versions of the same versionable objects in the database.
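A TARGET_EDITION import might be started as follows (the edition name exp_edition and the file names are assumptions; the edition must already exist and be usable on the target):

```shell
# Editionable objects in the dump file are created in edition exp_edition
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp TARGET_EDITION=exp_edition
```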
Because no import mode is specified, the default of schema mode will be used. See Oracle Database Development Guide for more information about the editions feature. If supplied, this parameter designates the object type to which the transform is applied. If no object type is specified, then the transform applies to all valid object types. This transform parameter affects the generation of pk or fk constraints that reference user-created indexes.
If set to Y, then it forces the name of the constraint to match the name of the index. If set to N (the default), then the constraint is created as named on the source database. This transform parameter affects the generation of the index related to a pk or fk constraint. If the transform parameter is set to Y, then the transform forces the name of an index automatically created to enforce the constraint to be identical to the constraint name. In addition, the index is created and defined using the default constraint definition for the target database, and does not use any special characteristics that might have been defined in the source database.
Accordingly, if you run an Oracle Data Pump import from a system where no restrictions exist, and you have additional constraints in the source index (for example, user-generated constraints, such as a hash-partitioned index), then these additional constraints are removed during the import. If set to N (the default), then the index is created as named and defined on the source database.
If set to N (the default), then archive logging is not disabled during import. After the data has been loaded, the logging attributes for the objects are restored to their original settings. This transform works for both file-mode imports and network-mode imports.
It does not apply to transportable tablespace imports. If set to N (the default), then the generated DDL retains the table characteristics of the source object. If set to Y, then it directs Oracle Data Pump to create pk, fk, or uk constraints as disabled.
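Disabling archive logging for the duration of a load might be requested like this (a sketch; the directory and dump file names are assumptions):

```shell
# Most operations performed by this import job are not archive-logged
impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
    TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```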
If set to N (the default), then it directs Oracle Data Pump to create pk, fk, or uk constraints based on the source database status. The IM column store is an optional portion of the system global area (SGA) that stores copies of tables, table partitions, and other database objects. In the IM column store, data is populated by column rather than by row (as it is in other parts of the SGA), and data is optimized for rapid scans.
The IM column store does not replace the buffer cache, but acts as a supplement, so that both memory areas can store the same data in different formats. If Y (the default value) is specified on import, then Data Pump keeps the IM column store clause for all objects that have one. When those objects are recreated at import time, Data Pump generates the IM column store clause that matches the setting for those objects at export time.
If N is specified on import, then Data Pump drops the IM column store clause from all objects that have one. If there is no IM column store clause for an object that is stored in a tablespace, then the object inherits the IM column store clause from the tablespace. The object then inherits the IM column store clause from the new pre-created tablespace. This transform is useful when you want to override the IM column store clause for an object in the dump file.
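Dropping the IM column store clause on import might look like this (a sketch; file names are assumptions):

```shell
# INMEMORY clauses from the source objects are omitted, so the imported
# objects inherit the IM column store setting of their target tablespace
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TRANSFORM=INMEMORY:N
```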
Alternatively you can put parameters in a parameter file. Quotation marks in the parameter file are maintained during processing. See Oracle Database Reference for a listing and description of parameters that can be specified in an IM column store clause. Specifying this transform changes LOB storage for all tables in the job, including tables that provide storage for materialized views.
If Y (the default value) is specified on import, then the exported OIDs are assigned to new object tables and types. Oracle Data Pump also performs OID checking when looking for an existing matching type on the target database. If N is specified, then the assignment of the exported OID during the creation of new object tables and types is inhibited; instead, a new OID is assigned. Inhibiting assignment of exported OIDs can be useful for cloning schemas, but does not affect referenced objects.
Before loading data for a table associated with a type, Oracle Data Pump skips normal type OID checking when looking for an existing matching type on the target database. Other checks, using a hash code for a type, version number, and type name, are still performed. If set to Y, then it directs Oracle Data Pump to suppress column encryption clauses.
Columns that were encrypted in the source database are not encrypted in imported tables. If set to N (the default), then it directs Oracle Data Pump to create column encryption clauses as in the source database. The value supplied for this transform must be a number greater than zero. It represents the percentage multiplier used to alter extent allocations and the size of data files. If the value is specified as Y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL.
The default is Y. Set this parameter to N to use the default segment creation attributes for the tables being loaded. This functionality is available with Oracle Database 11g Release 2. If the value is specified as Y, then the storage clauses are included, with appropriate DDL.
If NONE is specified, then the table compression clause is omitted and the table is given the default compression for the tablespace. Tables are created with the specified compression. If the table compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. For example:. Specifying this transform changes the type of compression for all tables in the job, including tables that provide storage for materialized views.
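A compression-clause transform might be specified as follows (a sketch; the file names are assumptions, and the escaping shown is for a Unix-style shell):

```shell
# The multi-word compression clause must be quoted; on the command line
# the quotation marks themselves must be escaped for the OS shell
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp \
    TRANSFORM=TABLE_COMPRESSION_CLAUSE:\"ROW STORE COMPRESS ADVANCED\"
```

Placing the TRANSFORM line in a parameter file instead avoids most of the escaping.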
For the following example, assume that you have exported the employees table in the hr schema. This results in the exclusion of segment attributes (both storage and tablespace) from the table.
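The import being described might be run like this (the directory and dump file names are assumptions):

```shell
# Segment attributes are omitted from the generated DDL, so the table
# is created with the target database's defaults
impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
    TRANSFORM=SEGMENT_ATTRIBUTES:N
```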
The data files must already exist on the target database system. A question mark (?) can be used as a wildcard in the file name portion of the path; you cannot use wildcards in the directory portions of the absolute path specification. If a wildcard is used, then all matching files must be part of the transport set. If any files are found that are not part of the transport set, then an error is displayed, and the import job terminates.
At some point before the import operation, you must copy the data files from the source system to the target system. You can copy the data files by using any copy method supported by your operating system. If desired, you can rename the files when you copy them to the target system.
See Example 2. Depending on your operating system, the use of quotation marks when you specify a value for this parameter can also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that you would otherwise be required to use on the command line. However, the file name portion of the absolute file path can contain wildcards.
Example 1. Example 2. This example illustrates the renaming of data files as part of a transportable tablespace export and import operation. Assume that you have a data file named employees.dat on the source system. Using a method supported by your operating system, manually copy the data file named employees.dat to the target system. As part of the copy operation, rename it to workers.dat. The actual data was copied over to the target database in step 1. Perform a transportable tablespace import, specifying an absolute directory path for the data file named workers.dat.
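The import step of this example might look like the following (a sketch; the dump file name and the absolute path of the renamed data file are assumptions):

```shell
# Transportable import that points at the renamed copy of the data file
impdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp \
    TRANSPORT_DATAFILES='/db/workers.dat'
```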
The metadata contained in tts.dmp is then imported. Example 3. Example 4. This example illustrates use of the question mark (?) wildcard in the file name. For example, a file named myemp. Specifies whether to verify that the specified transportable tablespace set is being referenced by objects in other tablespaces. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, then a failure is returned and the import operation is terminated.
Similarly, a failure is also returned if an index is in the transportable set but the table is not. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning.
Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, then the import operation is terminated.
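The dependency check described above might be enabled as follows (a sketch; the dump file name and data file path are assumptions carried over from the earlier transportable example):

```shell
# Verifies two-way dependencies within the transportable set before importing
impdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp \
    TRANSPORT_FULL_CHECK=YES TRANSPORT_DATAFILES='/db/workers.dat'
```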