How to handle an external logical model

L.S.

The OERA assumes that when data comes in from the ESL (Enterprise Services Layer), it is already converted to the logical model used by the BLL (Business Logic Layer).

However, what should happen, according to the OERA model, if, for example, an XML document comes in through the ESL that is structured and formatted according to the logical model of an external business? In effect, the external logical model has to be converted to the internal physical model and after that to the internal logical model.

This can be handled in two ways:
1. The XML enters the BLL, just like a dataset from the Presentation Layer would. The BLL sends a request to the DLL (Data Logic Layer), where the input is the XML file and the output is a dataset that conforms to the internal logical model, thereby hiding the ETL process from the BLL. The DLL first converts the external logical model to the internal physical model. The data is not yet stored in a datastore, however, but in temp-tables, because the BLL still has to validate it against the internal logical model. Then, as it always does, the DLL converts the internal physical model to the internal logical model, which is returned to the BLL. (A minimal sketch of what such a DLL entry point could look like follows below.)
2. The ETL process/conversion is done in the BSL (Business Services/Interface Layer). This raises a problem, however, because from the BSL you should only be able to access a BE/BT/BP; you should not be able to access the DLL, since skipping layers introduces fixed dependencies, which is one of the reasons it is not allowed in the OERA. Furthermore, if you code the ETL process that converts the internal physical model to the internal logical model in the BSL, you may end up duplicating code that is already present in the DLL.
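
As a rough ABL illustration of option 1 (all names below, such as dsOrderLogical and fetchOrderLinesFromXml, are invented for this sketch; the actual XML-to-logical mapping is only indicated in comments):

DEFINE TEMP-TABLE ttOrderLine NO-UNDO BEFORE-TABLE btOrderLine
    FIELD CustNum  AS INTEGER
    FIELD OrderNum AS INTEGER
    FIELD LineNum  AS INTEGER
    FIELD Quantity AS INTEGER.

DEFINE DATASET dsOrderLogical FOR ttOrderLine.

PROCEDURE fetchOrderLinesFromXml:
    /* DLL entry point: the BLL hands over the raw external XML and gets
       back a dataset that already conforms to the internal logical model,
       so the ETL stays hidden inside the DLL.                             */
    DEFINE INPUT  PARAMETER pcExternalXml AS LONGCHAR NO-UNDO.
    DEFINE OUTPUT PARAMETER DATASET FOR dsOrderLogical.

    /* 1. READ-XML the external document into staging temp-tables,
       2. map/convert the staged rows onto ttOrderLine,
       3. return the dataset - nothing is persisted yet, because the BLL
          still has to validate the data.                                  */
END PROCEDURE.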

I've asked Mike Ormerod from Progress and he favors the second way.

Can anyone shed some light here?

Met vriendelijke groet / Best regards,

Will van Beek
Progress Software


ptfreed

So where do we draw the line?

Jamie's comments about a Common Data Model are exactly what I was grasping for in my earlier post.

You say:
> In my view the Data Logic Layer is not only responsible for supplying the
> logical model to the Business Logic Layer, but for all data conversions
> necessary to achieve this.

That's reasonable, I suppose. But does that mean that Sonic data transformations are now part of your Data Logic Layer? How does that fit into OERA?

You also say:
> It may not be straightforward to convert the external logical model to the
> internal version; there may be data conversions necessary that require the
> conversion to the internal physical model first.

I must confess that I don't get this. Can you give me an example? In my mind, the only time that data needs to be transformed into the physical data model is when it is being written to the datastore. It may be that 2-byte and 4-byte integers are logically useful in transforming the data. If so, they should be a part of the appropriate logical model.

This doesn't change anything, really, since you still want all transformations to occur in the DLL. But it might change the way you look at the problem.

In truth, data transformations are going on all the time. Perhaps the data is coming in via ATM, then converted to TCP/IP, and then converted to ASCII text, BCD, binary data, etc. Up to that point, you can consider it part of the network layers and ignore it. But then this stream of bytes comes into Progress, where it is stored in a Progress data structure -- which involves all sorts of data transformations.

I think of your external data problem in much the same way. Maybe you need a function to convert EBCDIC to ASCII, or German to English, or big-endian to little-endian. These are not the province of the Business Logic Level; they are mechanical functions and should be performed at a higher level.

Still, some data conversions are not so easily swept under the rug, since they require understanding of the underlying data, data structures, and data relationships. Picking the correct place to perform these conversions is as much art as science. But I'm inclined to perform them at as high a level as possible; I don't want my business logic to have to worry about the data models of every other company I work with.

Having said all this now gives me a clue as to where I would "draw the line." If external data can be usefully converted mechanically -- without requiring any internal data lookups or specific knowledge of my business practices -- then I'm inclined to consider this conversion a service that should be called before the data ever hits my business logic. If not, then the data needs to come deeper into the system to be massaged.

It seems to me that this addresses your concern about introducing unwanted dependencies between levels. The conversion doesn't depend upon anything inside my system (certainly not the physical storage mechanisms). It only depends upon the published interfaces of my business partners -- which have nothing at all to do with my business logic.

This fits well within the spirit of the OERA, even if it does add some dotted lines to the pretty diagrams. (I don't know that it does, actually; I'll leave that question to the experts.)


Oh, those damn assumptions

Hi all,

Thanks for all your responses. They have helped me redefine some of my assumptions:

1. The Business Logic Layer:
- is basically responsible for all validation. (Validation, based on constants, may be deferred
to the Presentation Layer for purposes of performance or interaction)
- only accepts Progress datasets. Either:
- an empty dataset for filling/refreshing data
- a filled dataset, containing beforedata, for validation and storage
- should have no knowledge about where the data originates (Presentation Layer or Enterprise
Service/Interface Layer).
- is the only one that can access the Data Logic Layer.

2. The Data Logic Layer
- is responsible for converting the internal physical model to the internal logical model and
  vice versa. So, I distance myself from my previous assumption that the DLL is responsible for
  all data conversions.

3. The Business Service/Interface Layer
- is responsible for forwarding the request to the appropriate BE/BT/BP. It should operate
according to the proxy pattern.

So, data entering the Business Service/Interface Layer through the Enterprise Service/Interface Layer:
- should be converted/transformed into the format of the internal logical model, before any
  BE/BT/BP is accessed.
- can only reach the Data Logic Layer through a BE/BT/BP, otherwise a layer would be skipped.

4. Based on all your input, I propose a new component, called a "Business Adapter Component", which:
   - operates in the Business Service/Interface Layer (visually located at the bottom of this layer).
   - can only make calls to the BLL through the Business Services/Interface Layer in which it operates.
     Because the APIs function as a level of indirection to the BLL, the Business Adapter should adhere
     to this and should not make any direct calls to the BLL.
   - works according to the strategy pattern. That is:
     - There is one interface, in this case the API in the Business Service/Interface Layer, accepting
       character data, for example an XML document.
     - The API forwards the call to a generic factory class, thereby conforming to the proxy-pattern
       nature of the Business Services/Interface Layer.
     - This generic factory class operates according to the factory method pattern and is part of the
       "Business Adapter Component". It could be called something like a "Business Adapter Manager" or
       "Business Adapter Redirector", because it redirects the call to the appropriate Business Adapter
       implementation.
     - The Business Adapter implementations themselves are based on the adapter pattern, since they
       convert an external model to the internal logical model format and back. Because the BLL should
       not be able to distinguish between data coming from the Presentation Layer or the Enterprise
       Services Layer, the data should enter the BLL as if it came from the Presentation Layer.
   So, any Business Adapter:
   - has knowledge of the external model
   - has knowledge of the internal logical model, and therefore knows how to make the conversion
   Any Business Adapter performs the following operations (a rough code sketch follows below):
   a. It validates whether the input conforms to the representation of the external model (see 5).
   b. It creates a data structure in the form of a Progress dataset with the pertaining temp-tables.
   c. Data from the XML, if any, is put into the dataset.
      - When fetching data, the assumption is that the information describing which data to fetch is
        available and is also converted by the Business Adapter into a format acceptable to the
        Business Services/Interface Layer.
      - When storing data, the XML should contain enough information to construct after-table and
        before-table data with their corresponding row states.
   d. The dataset is now ready, and from here it is 'business as usual': the pertaining API in the
      Business Service/Interface Layer is called, which eventually returns a filled or updated dataset.
   e. The data structure and its contents are converted back to XML based on the Business Adapter's
      knowledge of the external model.

5. The Common Data Model (not using DataXtend-SI ;) in the OERA context is narrowed down to an
   external representation, e.g. in the form of XSDs, of the data structures needed. These encompass
   the logical model, the physical model and all external models.
   - The Data Logic Layer uses the CDM as a mapping between the physical and the logical model,
     dynamically fetching and updating data.
   - The Business Adapter uses the CDM as a mapping between an external model and the logical model.
   This implies that the CDM is located in the only vertical OERA layer, the "Common Infrastructure
   Layer".
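
To make point 4 a bit more concrete, here is a rough ABL sketch of one Business Adapter implementation for a simple order-line change message. Every name below (the temp-tables, dsOrderLogical, adaptOrderLineXml, bsl/orderapi.p) is invented for illustration, and the READ-XML/WRITE-XML schema handling is glossed over; this is a sketch of the pattern, not a definitive implementation:

/* External model: the shape of the partner's XML message. */
DEFINE TEMP-TABLE ttExtLine NO-UNDO XML-NODE-NAME "OrderLineData"
    FIELD extCustomer AS INTEGER XML-NODE-NAME "Customer"
    FIELD extOrder    AS INTEGER XML-NODE-NAME "Order"
    FIELD extLine     AS INTEGER XML-NODE-NAME "Line"
    FIELD extQuantity AS INTEGER XML-NODE-NAME "Quantity".

/* Internal logical model, exactly as the Presentation Layer would supply it. */
DEFINE TEMP-TABLE ttOrderLine NO-UNDO BEFORE-TABLE btOrderLine
    FIELD CustNum  AS INTEGER
    FIELD OrderNum AS INTEGER
    FIELD LineNum  AS INTEGER
    FIELD Quantity AS INTEGER.

DEFINE DATASET dsOrderLogical FOR ttOrderLine.

PROCEDURE adaptOrderLineXml:
    DEFINE INPUT-OUTPUT PARAMETER pcXml AS LONGCHAR NO-UNDO.

    /* a + b: check/parse the external message into the staging table. */
    TEMP-TABLE ttExtLine:READ-XML("LONGCHAR", pcXml, "EMPTY", ?, ?).

    /* c: build the before/after images so the BLL sees a changed row,
       exactly as if the data had come from the Presentation Layer.     */
    TEMP-TABLE ttOrderLine:TRACKING-CHANGES = TRUE.
    FOR EACH ttExtLine:
        CREATE ttOrderLine.     /* a real adapter would fetch the existing
                                   row first and modify it                 */
        ASSIGN ttOrderLine.CustNum  = ttExtLine.extCustomer
               ttOrderLine.OrderNum = ttExtLine.extOrder
               ttOrderLine.LineNum  = ttExtLine.extLine
               ttOrderLine.Quantity = ttExtLine.extQuantity.
    END.
    TEMP-TABLE ttOrderLine:TRACKING-CHANGES = FALSE.

    /* d: 'business as usual' - call the ordinary API in the Business
       Service/Interface Layer with the logical dataset.                */
    RUN bsl/orderapi.p (INPUT-OUTPUT DATASET dsOrderLogical).

    /* e: return the result to the caller; a real adapter would map the
       logical rows back onto ttExtLine (the external model) first.     */
    DATASET dsOrderLogical:WRITE-XML("LONGCHAR", pcXml, TRUE).
END PROCEDURE.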

Is this sound? Looking forward to your responses.
regards,
Will


That's the question

you said:
"That's reasonable, I suppose. But does that mean that Sonic data transformations are now part of your Data Logic Layer? How does that fit into OERA?"

No, definitely not. These transformations are made before the data reaches the Enterprise Services Layer. That's the easy case, because the data is already converted to the internal logical (or physical) model.

you said:
"I must confess that I don't get this. Can you give me an example?"

I was afraid you might ask that ;)
Suppose an XML document comes in with the following data:

  <OrderLineData>
    <Customer>25</Customer>
    <Order>2460</Order>
    <Line>4</Line>
    <Quantity>13</Quantity>
  </OrderLineData>

This means that the quantity of this particular orderline has changed to 13.
Suppose my logical model can only validate the change in this order line if other fields are added, like item prices and credit limits. So, access to the physical data model is necessary to replenish this data.
My thought was to first convert to the physical model, but in temp-table format, because it still has to be validated by the BL. This could then be converted to the logical model, because that is one of the responsibilities of the DL.
However, the more I think about it, the more I get the feeling that I was led by a fallacy: the fact that the physical data is accessed doesn't mean that the XML data first needs to be converted to that model. So it's a good thing you asked this, Phil; I'll abandon the conversion to the physical model and agree with you that the only time data needs to be transformed into the physical data model is when it is being written to the datastore.
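
To illustrate that conclusion with a minimal sketch (re-using the illustrative dsOrderLogical/ttOrderLine definitions sketched earlier in this thread; getLinePrice, getCreditLimit and the hDataLogic handle are assumed names, not an existing API): the incoming change stays in the logical model, and the BL simply asks the DL for the extra facts it needs in order to validate it.

DEFINE VARIABLE hDataLogic AS HANDLE NO-UNDO.   /* assumed to reference the
                                                   persistent DL procedure  */

PROCEDURE validateOrderLineChange:
    DEFINE INPUT PARAMETER DATASET FOR dsOrderLogical.

    DEFINE VARIABLE dPrice       AS DECIMAL NO-UNDO.
    DEFINE VARIABLE dCreditLimit AS DECIMAL NO-UNDO.

    FOR EACH ttOrderLine:
        /* DL lookups supply the missing facts (price, credit limit); no
           conversion of the incoming XML to the physical model is needed. */
        RUN getLinePrice   IN hDataLogic
            (ttOrderLine.OrderNum, ttOrderLine.LineNum, OUTPUT dPrice).
        RUN getCreditLimit IN hDataLogic
            (ttOrderLine.CustNum, OUTPUT dCreditLimit).

        IF ttOrderLine.Quantity * dPrice > dCreditLimit THEN
            RETURN ERROR "Credit limit exceeded".   /* or flag the row */
    END.
END PROCEDURE.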

you said:
"... These are not the province of the Business Logic Level; they are mechanical functions and should be performed at a higher level."

I completely agree.


ptfreed

I say, do it in the Business Logic

The data validation you describe needs to occur at the level of business logic.

But this has nothing to do with putting the data into a data transport structure (the Common Data Model). Converting to the CDM should need little more than assigning the data into fields with standard names and formats. It may involve some text-to-numeric conversions, transferring the XML into a temp-table or ProDataSet, or even some minor sanity checks. But you will NOT be able to determine whether this is a valid, fillable order until you get down to your business logic.

Now that I say this, I realize that I'm probably just repeating what you already know. You're really asking how to fill in those missing fields, like the unit price. Is that it?

I suppose that this could be done by the ESL, since it seems somehow "clean" to package up the data all shiny and pretty before dropping it into the system. But I now agree with your original assessment -- this violates the spirit of OERA. In order to even recognize that there is missing data, you need access to the business logic. Once the BLL receives the external data (in a Common Data Model format), it can call a Please-Fill-In-The-Blanks service to get whatever it needs.

Alas, I now appear to be going against what other (more experienced) heads have suggested. I hope that this is because the problem is now more clearly stated. But I guess we'll soon see.


ptfreed wrote: > I suppose

ptfreed wrote:
> I suppose that this could be done by the ESL, since it seems somehow "clean" to package up the data all shiny and pretty before dropping it into the system.

If this was being done with a GUI, how would you do it? I'm guessing you'd say that the GUI would have all the information that it needs to send the "complete logical data set" to the BL. Where did the GUI get that information? From the BL of course.

The implementation for the ESL should reflect that of the GUI. I would see it happening something like this for an order change:

1. Actor* wants to change an order
- Actor is the user for GUI or for ESL, a message arriving from the external system
2. Client-side** retrieves existing logical data set from BL
- Client-side is either the GUI or the ESL
3. Client-side converts from Actor-format*** into logical model
- Actor-format is the fields on the screen or the incoming message format
4. Client-side does any client-side relevant validation
- Eg. for GUI, check date ranges validity without needing to go to BL
- Perhaps other functionality like re-calculating prices so that they're visible to the user before the order change is submitted. Note that complex price calculation may be a BL task, so a call to the BL is necessary, desirable, and completely "OERA compliant".
- Such validation would not usually be necessary for the ESL, as direct and immediate pre-BL processing feedback is generally not required. (This depends on the granularity of the services being exposed to the ESL. Usually, external services are kept very coarse-grained to reduce "chatter" during interactions, simplifying the use of the APIs and improving performance.)
5. Client-side forwards the updated logical model to BL
6. BL does all relevant business validation
- Eg. the date validation mentioned above, recalculating total price, etc
7. BL saves the data via the DL
8. BL reports status of save to client
9. Client reports success to Actor. Eg. GUI visualises "Data Saved", ESL returns "Message processing success", etc. (A rough code sketch of steps 2-5 for the ESL case follows below.)
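
In ABL terms, steps 2-5 for the ESL case might look roughly like this. It re-uses the illustrative temp-table and dataset definitions from earlier posts; fetchOrderLine, updateOrderLine and the hServiceInterface handle are assumed names for the BL-facing API, not Progress-supplied calls:

DEFINE VARIABLE hServiceInterface AS HANDLE NO-UNDO.  /* assumed: handle to the
                                                         BL service interface   */

PROCEDURE handleOrderChangeMessage:
    DEFINE INPUT PARAMETER pcMessage AS LONGCHAR NO-UNDO.

    /* 3 (first half): parse the incoming message (the Actor-format).   */
    TEMP-TABLE ttExtLine:READ-XML("LONGCHAR", pcMessage, "EMPTY", ?, ?).
    FIND FIRST ttExtLine.

    /* 2: retrieve the existing logical data set for that order line.   */
    RUN fetchOrderLine IN hServiceInterface
        (ttExtLine.extCustomer, ttExtLine.extOrder, ttExtLine.extLine,
         OUTPUT DATASET dsOrderLogical).

    /* 3 (second half): apply the change onto the logical model.        */
    TEMP-TABLE ttOrderLine:TRACKING-CHANGES = TRUE.
    FIND FIRST ttOrderLine.
    ttOrderLine.Quantity = ttExtLine.extQuantity.
    TEMP-TABLE ttOrderLine:TRACKING-CHANGES = FALSE.

    /* 4: no immediate client-side validation needed for the ESL.       */

    /* 5: forward the updated logical model to the BL, which validates
       and saves it (steps 6-8).                                         */
    RUN updateOrderLine IN hServiceInterface
        (INPUT-OUTPUT DATASET dsOrderLogical).
END PROCEDURE.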

ptfreed wrote:
> I suppose that this could be done by the ESL, since it seems somehow "clean" to package up the data all shiny and pretty before dropping it into the system. But I now agree with your original assessment -- this violates the spirit of OERA.

As outlined above, I find this to be 100% in the spirit of OERA, as long as layers are not "skipped" during the interactions. The ESL should never invoke the DL directly. That said, I believe things like date validation may be extracted into a separate implementation and directly invoked from the GUI, ESL, BL, DL, etc. as required. The validation in this case is independent of the layers and indeed should be directly invoked, so that the code is not duplicated. Note that if your GUI is HTML/Java/.Net, you may still need to duplicate some code in JavaScript/.Net/Java (if you don't want to make that validation itself available as a service).


ptfreed

I think we're on the same page

jamie.townsend said:
> The implementation for the ESL should reflect that of the GUI

I agree. I think of the ESL as a collection of user interfaces. Some of the users are not people. (I could call them Actors, but who ever heard of an Actor Interface? Besides, the acronym AI is taken.)

jamie.townsend said:
> If this was being done with a GUI, how would you do it? I'm guessing
> you'd say that the GUI would have all the information that it needs
> to send the "complete logical data set" to the BL.

That works if there is limited need for real-time validation. But I usually want my interfaces to be as interactive as possible. If there aren't enough widgets in the San Francisco warehouse, I may want to issue a warning before the page is complete. Or I may want some form of smart field completion. In such cases there will be multiple interactions between the GUI and the BLL -- perhaps after each field, or even each keystroke. For an interactive screen, that kind of overhead is usually acceptable. (As long as the data entry bottleneck is the typing speed and not the lack of system responsiveness.)

In one sense, there is actually business logic being written into the UI here. How does the interface know that certain fields require additional processing or validation? Isn't that itself a form of business knowledge? I suppose that the "correct" thing to do is to have the interface pass *all* of the data to the BLL "just in case" -- but this is a compromise I'm usually willing to make.


tamhas

Not really

> In one sense, there is actually business logic being written into the UI here. How does the interface know that certain fields require additional processing or validation?

The point is not that the UI knows that a field needs validation, but that the UI knows that a value change in that field is a meaningful event that someone might like to know about so it sends a message to provide notification. What the BL then does with it is up to the BL.


RE: I think we're on the same page

In principle, all business logic validation should be performed in the BL. Additional logic can be applied in other layers, like the UI layer (or the DL, e.g. to enforce data integrity in the DB), as required. If the logic is identical, it should be written in one place (in one language) and referenced as far as possible/practical. In the case of UI logic, this may be some UI validation or an asynchronous call through to the BL to improve the user experience. In the example that you mentioned, I'm imagining a BL lookup of warehouse inventory levels. This could either be done by submitting the whole order to the BL for "pre-validation" or by just accessing a re-usable service for looking up inventory levels for an individual widget (a service that would no doubt have other use-cases beyond placing an order).
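
A minimal sketch of such a re-usable inventory service, with every name (getInventoryLevel, fetchOnHandQty, hDataLogic) invented for illustration:

DEFINE VARIABLE hDataLogic    AS HANDLE  NO-UNDO.  /* assumed DL procedure handle */
DEFINE VARIABLE iOnHand       AS INTEGER NO-UNDO.
DEFINE VARIABLE iRequestedQty AS INTEGER NO-UNDO.

/* UI pre-validation and the complete-order validation call the very same
   service, so the lookup logic lives in exactly one place:               */
ASSIGN iRequestedQty = 500.                 /* quantity the user wants to order */
RUN getInventoryLevel (42, "San Francisco", OUTPUT iOnHand).
IF iOnHand < iRequestedQty THEN
    MESSAGE "Only" iOnHand "widgets left in the San Francisco warehouse."
        VIEW-AS ALERT-BOX WARNING.

PROCEDURE getInventoryLevel:
    DEFINE INPUT  PARAMETER piItemNum   AS INTEGER   NO-UNDO.
    DEFINE INPUT  PARAMETER pcWarehouse AS CHARACTER NO-UNDO.
    DEFINE OUTPUT PARAMETER piOnHand    AS INTEGER   NO-UNDO.

    /* The BL delegates the fetch to the DL; callers never touch the
       physical model.                                                    */
    RUN fetchOnHandQty IN hDataLogic (piItemNum, pcWarehouse, OUTPUT piOnHand).
END PROCEDURE.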

A couple of things to note: the logic for "pre-validating" a whole order would most likely be identical to the logic for validating the complete order, and that logic may well just invoke the re-usable service for inventory levels.

Through this, a couple of things become clear:
- an interactive UI will usually have far more interaction with the BL than the equivalent ESL approach
- an interactive UI will need a finer grained set of services available
- the finer grained services used by the UI will result in a more complex implementation for the interaction
- once a set of data from the UI is complete, it should be handled by exactly the same BL that handles the ESL
- the ESL client will have fewer API calls to implement, but will still need to deal with a potentially vast number of return results/errors

ptfreed wrote:
> In one sense, there is actually business logic being written into the UI here. How does the interface know that certain fields require additional processing or validation?
> Isn't that itself a form of business knowledge? I suppose that the "correct" thing to do is to have the interface pass *all* of the data to the BLL "just in case" -- but
> this is a compromise I'm usually willing to make.

In general, the BL should only need the fields contained in the logical model, and the logical model should only contain the fields required for the service. If there are fields that are specific to the User Interface or Enterprise Service layer, I would argue that the BL shouldn't know about them, otherwise your BL is not truly separated from how it is used. At least from a logical perspective this is true; however, you may decide (perhaps for technical reasons) that in specific cases you take an alternate approach.


tamhas

I would agree that

I would agree that validation is a BL task. But, this is a question of how one draws the line. I.e., the service interface should really only be concerned about converting the message from its external form to an internal form and then passing it off to a task in the BL which would then validate the data prior to handing it to the factory to create the corresponding object.


tamhas

The key here is that the

The key here is that the message is not the object. A change in quantity on an order line is an action to be performed on an order line object, with all the validations, checks, balances, and consequences associated with making that change and the related changes to the order object. The message is just the instruction for what to do. Sometimes the message provides the information necessary to build a new object, but that is just a particular type of message.


RE: How to handle an external logical model

In a broader Enterprise Architecture approach, there would usually be a Common Data Model in play that is used as the basis for all data exchange between all applications. With this approach, each application needs to know how to translate from its logical data model into the Common Data Model and back.

When the application receives external data in the Enterprise Service Layer, it does so in the form of the external data source. If a Common Data Model is in use, only one API must be exposed: the one using the Common Message Model (which, in simple terms, is just a way to structure the CDM into a message). If no Common Data Model (or, more correctly, Common Message Model) is in use, multiple external APIs must be implemented.

Either way, it's the job of the Enterprise Service Layer to transform the external model into the application's logical model. Ideally, it should do this without going through the physical model first and without invoking Data Layer objects directly. In a Common Data Model world, the data would usually be complete for the target application before it is sent there. In other cases, partially complete data (from the application's perspective) could be received.

During the transformation to the application's logical model, the Enterprise Service Layer should make the appropriate calls to the BL (via the SI of course), which in turn may call the DL, to get the data it needs to complete the transformation to the application's logical model.

My 0.02 CHF,
Jamie


Common Data Model

Hi Jamie,

If you speak about a Common Data Model, do you mean the Physical Model, consisting of the entities, their relations and constraints ?

regards,
Will


Common Data Model

A Common Data Model is an Enterprise Wide model of entities, attributes and relationships. The majority of systems will store something about some entities. Some applications might store everything about a few entities. Usually, it is unnecessary and indeed impractical for a single application to store everything about every entity. The Common Data Model is usually used to transport and work with data on an application independent level.

By contrast, an application's logical model will contain the data that it needs. This will be a sub-set of the Common Data Model, often simplified for use in that application. The application's BL uses the logical model to do things within the application, but at a logical business level. Anything that goes beyond the application typically goes back out through the Enterprise Service Layer to: gather further information, inform other applications of updates, etc.

Some further reading and example Common Data Model standards (that are always tweaked) can be found here: http://web.progress.com/en/Product-Capabilities/common-data-model.shtml....


thanks Jamie

.


ptfreed

CDM is more logical than physical

> If you speak about a Common Data Model, do you mean the Physical Model,
> consisting of the entities, their relations and constraints ?

I can't answer for Jamie, of course, but I think that this would be far too much information to have available to the ESL. Some sanity checking may be performed on the incoming data -- particularly if you have a bidirectional interface that can reject input because of (say) missing or malformed fields -- but I would keep it to a minimum. Real data validation should happen in conjunction with the business logic.

Generally, a Common Data Model would have more in common with the business model than with the physical one. Think of the CDM as a series of simple buckets with standard names and properties (this is an integer, this is a string, ...) that you can slot the incoming data into. Or, as I said in an earlier post, think of it as an interface: "Regardless of my input, I promise that the output will look like *this*."
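
In ABL terms, such a CDM "bucket" could be nothing more than a temp-table (or its XSD equivalent) with standard, application-independent names and types; the names below are invented for illustration:

DEFINE TEMP-TABLE ttCdmOrderLine NO-UNDO XML-NODE-NAME "OrderLine"
    FIELD CustomerId AS CHARACTER   /* standard CDM name, not my database's CustNum */
    FIELD OrderId    AS CHARACTER
    FIELD LineNumber AS INTEGER
    FIELD Quantity   AS INTEGER.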

The code that converts to CDM, wherever it is housed, needs to understand your business partner's published data specification pretty well. It doesn't need to know much about internals -- neither yours nor theirs.

An aside: It sounds like you are considering the relations and constraints to be a part of the physical model. Certainly they can be, if you have validations and triggers in your database. But I tend to think of them as part of the business logic. Having them at the physical layer only serves as an integrity check against bad programming; one hopes that problem data will never get that deep.

Alas, this often means programming the checks in two places -- which can be error-prone and performance-draining. But I always start out this way, and only tweak if there is a vital business reason to do so. Among other things, this makes it easier to document and track those noxious tweaks.


Thanks Phil and Thomas, for

Thanks Phil and Thomas, for your swift responses.

In my view the Data Logic Layer is not only responsible for supplying the logical model to the Business Logic Layer, but for all data conversions necessary to achieve this. It doesn't matter whether the original or source data is accessed from a fixed location on a server, from the cloud, or pulled/pushed in from the Enterprise Services Layer; it's still a data source, and data sources and their conversions should, in my opinion, be handled by the DLL.

You both seem to agree on standardizing the incoming data (the MDP) without the use of the Data Logic Layer. In my view the service/interface layer of the BL should be set up according to the proxy pattern, where the APIs of the interface layer are just placeholders that control access to the actual BE/BT/BP (Business Entity, Business Task, Business Process). If the interface layer performs an ETL, it in fact becomes an adapter. So I would prefer a Business Adapter in the BL, above the BE, BT and BP but below the BL's interface layer, that sends the MDP to the DL, which returns the internal-logical-model version of the MDP as a dataset. The adapter would then call a BE/BT/BP and supply the dataset for business validation, after which the data is stored.

your thoughts please.
regards and TIA
Will


tamhas

I would distinguish between

I would distinguish between two contexts.

1. I have a BL requirement for some inventory data in an order service and make a call to the DL of the order service to obtain that data. The DL knows that the data is not local and goes across the bus to the inventory service to obtain it.

2. An EDI message comes in to the order service and thus needs to be validated, turned into an order, and persisted.

One is DL; the other is SI.

And, yes, the EDI message should result in building an Order object which is then persisted like any other Order object. There should not be a separate persistence path for the EDI order. The same is true for an Order created in the UI.


> In my view the Data Logic

> In my view the Data Logic Layer is not only responsible for supplying the logical model to the
> Business Logic Layer, but for all data conversions necessary to achieve this. It doesn’t matter
> whether the original or source data is accessed from a fixed location on a server, from the
> cloud or pulled/pushed in from the enterprise services layer, it’s still a datasource, and
> datasources and their conversions should in my opinion be handled by the DLL.

I think it depends on how you view the data.

If the incoming data is viewed to be "owned" by the application, accessing that data would be done via the DL. A Data Access Object in the DL would read the external data and convert it into the logical model. This DAO would only know how to convert between the external physical model and the logical model.

If some operations need to be performed on the data in its logical model format, it may be pushed up to the BL for processing. Otherwise, it may be passed back directly into a different DAO for persisting in the application's usual physical store (i.e. the database).

In short, if the incoming data is viewed to be "owned" by the application, the Enterprise Service Layer doesn't enter into the picture.

If the incoming data is not "owned" by the application, the Enterprise Service Layer will play a major role as discussed in my other post.

--
Jamie


thanks for the input. Could

thanks for the input.
Could you give an example of data that comes in through the Enterprise Services Layer, but is not owned by the application?

Would this be data that doesn't fit in the Common Data Model ?

regards,
Will


Any data exchanged with

Any data exchanged with another company should come in via the Enterprise Services Layer. This 3rd party data may be delivered in the (Enterprise Wide) Common Data Model (which is not to be confused with any single application's logical model).

In larger SOA projects, we usually see data being exchanged with 3rd parties and other applications in a concise format for the data exchange. The data would then be converted into the Common Data Model so that it can be passed to any application throughout the organisation. It would then enter each application via its Enterprise Services Layer in the (Enterprise Wide) Common Data Model.

There is a Progress SOA Reference Model, which I unfortunately can't quickly locate on the public pages on Communities; however, the OpenGroup have a similar SOA Reference Architecture which can be found here: https://www.opengroup.org/projects/soa-ref-arch/uploads/40/19713/soa-ra-... In particular, page 18 shows a diagram for layering and abstraction in a bigger SOA sense. The OpenEdge Reference Architecture is part of the Operational Systems layer in that picture.


tamhas

I am a little unsure about

I am a little unsure about your precise meaning in some cases, because there are terms in your vocabulary which I am sure have special meaning to you, but which may or may not have the same meaning to me.

That said, I think I am with Mike here in that one of the purposes of a service interface is to take a message in the external format and convert it into a form which works internally.

E.g., suppose we have a BL Order object. In the DL we have an object which interfaces with the datastore and which creates and consumes message data packets (MDPs) that are sent to and received from the BL. In the BL, we have a factory that consumes these MDPs and turns them into Order objects. When we save an object, it sends an MDP to the DL for persistence and reconciliation (i.e., optimistic locking and such). The service interface receiving Order data from the bus or whatever should be creating the same MDP to send to the DL to be consumed by the same factory. The main difference being that the service interface needs to validate the data prior to sending it to the BL.


So, if I understand you

So, if I understand you correctly, the Service Interface of the BL has a factory class that standardizes and validates the MDP before it is supplied to a factory in the DL for storage.

What do you mean by validating the MDP ?
I gather it is not BL validation, since your validation is done before the data is sent to the BL.

regards,
Will


tamhas

One is not validating the

One is not validating the MDP, but rather the data. I.e., if you receive an EDI Order, you should verify that it is for a valid customer before building an order object.


ptfreed

Just a question

Others here are more qualified than I to have an opinion on this. But I have a question on your premise.

Why does the external data need to be translated to the physical model first? Strictly from a design standpoint, it usually makes more sense to translate between logical models.

Perhaps we need the concept of an external logical model (an interface, if you prefer) that some layer (the ESL? the BSL?) can use to standardize incoming data. This model may be the same as the physical model or the internal logical model, though it need not be either of these. There may even be several layers of conversion -- one, say, to convert text data from German to English, and a second to convert the data into a standard structure. None of this requires access to deeper layers of the OERA model.

(As is often the case when I post over my head, I have a feeling that I'm missing something; I confess that my understanding of the OERA is not as deep as yours. Sorry. :-)


RE: just a question

"Why does the external data need to be translated to the physical model first? Strictly from a design standpoint, it usually makes more sense to translate between logical models."

It may not be straightforward to convert the external logical model to the internal version; there may be data conversions necessary that require the conversion to the internal physical model first.
Conversion between internal physical to internal logical model is a task of the Data Logic Layer, so we do need to access the deeper levels of the OERA.

regards,
Will