Everything posted by Yaron [LaserOp]

  1. Are you the device asking for trace data, and receiving this? In that case I'd really recommend asking, during initialization, for trace in format "1" (ASCII absolute) instead of "4" (packed binary). That would be both a lot more readable (or, well, just plain readable) for you personally, and easier to code for (assuming you're using a language with basic text/string processing capabilities).

As for what this tells you: first, the TRCFMT line tells you (besides that it's for the right lens):

1. That the data (i.e. the radius distances in the R records, in this case) will be in the packed binary format (the hardest to read/use; really no reason for it unless you need to work with extremely low bandwidth or memory).
2. That you will get 512 radius distances in the R records (the main data of the frame trace; each value is the radius distance from the center position of the frame).
3. That the angular distance between the R values is equal, so the total 360° is spread equally, effectively giving you a sequence of radii with about 0.7° of rotation between them.
4. That the frame trace you're receiving comes from tracing a frame, rather than from a demo lens or a pattern (probably not actively useful for you beyond basic verification that you're not receiving something unexpected).

And, of course, the R record, which holds the actual shape of the frame trace, as a sequence of radius distances from the center along the circumference of the frame shape.

Your main issue with reading this is, again, that "4" packed binary format, which is not particularly human readable. You can see the details of the format in the DCS, section (in 3.14 ver03) 5.4.15.4 "Packed binary format". Though, really, read all of 5.4 "Tracing Datasets" for coverage of how to read frame trace data. The main records you care about are TRCFMT and R. It's a good idea to also look at A, for when the angle distances aren't "E"/equiangular but "U" (rare but happens) or "C" (so far very, very rare). Depending on what your device is/does, it's possible you don't care about the sag data ZFMT/Z/ZA at all; otherwise the behavior is generally similar to the main trace data.

Notice that if you're not reading files with pre-made trace data (in which case what the file has is what you get), but are instead asking a VCA Host for the frame trace, then on initialization you can specify which formats you want/support. Most modern LMS will probably be able to convert/interpolate whatever format trace they have to whatever you asked for.
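For what it's worth, a minimal sketch (not a full DCS parser) of what consuming the ASCII-absolute format looks like once you've switched to format "1": collect the radii from the R records and turn them into x/y points, assuming the 512 equiangular values the TRCFMT announced. The 1/100 mm unit is my assumption of the usual convention; verify it against the document.

```python
import math

def parse_radii(record_lines):
    """Concatenate the values of all R records into one list of radii (mm)."""
    radii = []
    for line in record_lines:
        if line.startswith("R="):
            # ASCII absolute: semicolon-separated integers, assumed 1/100 mm
            radii.extend(int(v) / 100.0 for v in line[2:].split(";") if v)
    return radii

def to_points(radii):
    """Convert equiangular radii to (x, y) points around the trace center."""
    n = len(radii)
    step = 2 * math.pi / n          # 512 values -> about 0.7 degrees per step
    return [(r * math.cos(i * step), r * math.sin(i * step))
            for i, r in enumerate(radii)]

# Toy 4-value "trace" just to show the shape of the data (a real one has 512):
points = to_points(parse_radii(["R=2500;2500;2500;2500"]))
print(points)
```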
  2. Hi Ryan, At first glance, everything does seem to be fine. The trace data you sent is formatted correctly without doing anything unusual, and there isn't even any obvious mismatch (at least at the "eyeballing it" level) with related records like HBOX/VBOX/CIRC, if the machine tests them. So, while I don't work for WECO and can't be sure what/how this machine is doing, a few guesses at what might be causing the problem:

1. Maybe it's only expecting "Frame" traces and doesn't know how to handle the "P" type for Patterns (fifth field of the TRCFMT record)? The behavior should be exactly the same, it shouldn't technically matter, but if it's explicitly matching "F" there for supported types, it would obviously decide it doesn't support what's traced, so it can't use it. Try sending F instead of P to test if this is it (e.g. instead of "TRCFMT=1;512;E;R;P" send "TRCFMT=1;512;E;R;F", and the same for the left side).
2. Maybe it can't handle the ZFMT records? It's just ZFMT=0, which is the default when ZFMT isn't sent, so ignoring it is the correct behavior. But if it's expecting to just not get it (which normally it probably really wouldn't in this case), and if it's expecting to get the two sides of the trace as a block (which it technically shouldn't expect, but in practice is what would practically always happen), then the ZFMT in the middle would be an unsupported label in the middle of the trace data. Try just not sending ZFMT, instead of ZFMT=0, to see if that's it.
3. Is it actually expecting a trace of this type? Are you sending a frame trace of "1;512;E" because it was agreed upon during initialization, i.e. it was in the list of formats the machine presented as supported, and was the format the host agreed to send? Or did the machine specify a different format/resolution, without including this one in its list of supported formats? Your communication log uses an already negotiated Request ID and doesn't include the auto-initialization, so I can't see how initialization went, but it's something you should verify. Possibly the machine is finicky about that. Ideally it should be fine accepting anything it knows how to handle, but technically there's one format agreed upon, so only one it should officially receive (even if in practice most machines probably don't really care, and it's just extra work for both sides).
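If it helps with testing, a throwaway sketch for producing the variants from guesses 1 and 2 above out of a captured trace block (purely for experimentation, not production code):

```python
def patch_trace(records):
    """Flip the fifth TRCFMT field from P to F, and drop the ZFMT record."""
    patched = []
    for rec in records:
        if rec.startswith("TRCFMT="):
            fields = rec[len("TRCFMT="):].split(";")
            if len(fields) >= 5 and fields[4] == "P":
                fields[4] = "F"            # guess 1: present the pattern as a frame
            rec = "TRCFMT=" + ";".join(fields)
        elif rec.startswith("ZFMT"):
            continue                       # guess 2: omit the default ZFMT record
        patched.append(rec)
    return patched

print(patch_trace(["TRCFMT=1;512;E;R;P", "ZFMT=0", "R=2500;2501"]))
# -> ['TRCFMT=1;512;E;R;F', 'R=2500;2501']
```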
  3. What you need is mostly covered in the DCS, which is generally pretty good. But I do agree (and from past experience of being about where you are now) that the DCS documentation is much better as a reference, where you already know what you're looking for, than as a guide or tutorial. So let me give you a general overview here, and pointers to the sections of the DCS doc where you can look things up in more detail. (The latest v3.14 draft was posted on these forums very recently, so you have access to it; I'll give section numbers from it, but also titles, which are usually kept even if things move around between versions.)

Beyond that, it would probably be better if you open a different thread to follow up, either asking for more details on the general process, or about a specific part/label you're not sure about (and better on the Q&A forum rather than the discussion one). That could help get attention from other people who follow these forums, who didn't see connecting to the simulator through a terminal as relevant, but may see actual DCS usage as something to assist with. (On one hand, I'm a pretty good source for your case, since I'm also exactly in the position of having worked on connecting our devices to hosts on the lines, so everything I tell you is true and usable, at a level where it works and worked in real life. But there are people here who may have experience with more than one device type, or who are from the Host/LMS side, so have a much better general understanding of how things should work, and how they happen with a much larger range of devices, and they may be of better/more help if they do step in.)

So, at the basic level, the Host (usually the/an LMS) indeed manages the line in the lab, and all machines/Devices connect to it to get job data when a job reaches them. (I'm ignoring here anything beyond machines on the line during job production that need job data. If your device also needs to measure/test something and modify job data that other devices down the line will need, there's more to do, but this is still the same basis.) For the very basics, do go over the DCS doc sections 3 "Terms and Definitions" and 4 "Overview", 5.1 "Requirements"->"Records & Datasets", and 6.1 "Packets"->"General".

Your device knows what data (which Labels) it needs to process jobs (e.g. if you need to know the weight of the lens, you'll need WEIGHT). So it needs to let the Host know which Labels to send it. For that there is an Initialization process (start with 7 "Sessions" for the more general overview in 7.1 "General"; 7.2 "Initialization Sessions" is this. Realistically you want auto-initialization / auto-format initialization, which is the recommended way to work, and is widely supported). For this you tell the Host you want to do initialization, and if it supports it, you send it information about your device (in case there's any specific tailoring they want to do) and a list of the Labels you'll want (including, here, WEIGHT, which you'll send as part of the D=<labels you want> set in the DEF block). It will then give you a Request ID representing your set of definitions. Then, when you need data for a job, you send a job data request, with the job number (obviously) and that Request ID from the initialization.
You can keep using the same Request ID until you need to change what you're asking for (and so do initialization again), or until the Host tells you it doesn't recognize the ID and you need to do initialization again (in which case, well, do initialization again). This works with real Hosts, and also with the Simulator. For the Simulator, the records/labels it has for a job live in text files it keeps (and that you can create/edit) for each job. So whatever job data you want it to send you, put it in the file in advance. Notice that the files contain the labels for the job data, but nothing beyond that: no reserved/control characters like <FS> and such, and no STATUS label, which the Simulator (like other hosts) sends based on the situation (that's how you know whether you got job data correctly, there's no data for this job, there are errors, you need initialization, etc.; take a look at A.4 "Status Codes" and the associated table A.24 "Status Codes"). The documentation should show you examples both for the initialization process (with REQ=INI packets) and for getting jobs (with REQ=<the id> and JOB=<the job you want>). A hedged sketch of both requests follows below.
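To make the flow concrete, a sketch of the two requests described above sent over a raw TCP connection. The port, device details, label set, and pretend Request ID are all illustrative, and the framing follows the reserved-character descriptions in these threads; double-check it against section 6.1 before relying on it.

```python
import socket

FS, CRLF = b"\x1c", b"\r\n"   # <FS> starts a message; records end with <CR><LF>

def send_packet(sock, records):
    """Frame a list of "LABEL=value" records as one message and send it."""
    sock.sendall(FS + CRLF.join(r.encode("ascii") for r in records) + CRLF)

with socket.create_connection(("localhost", 11111)) as s:  # simulator address: illustrative
    # 1. Auto-format initialization: identify the device, list the wanted labels.
    send_packet(s, ["REQ=INI", "DEV=DNL", "D=JOB;STATUS;WEIGHT"])
    reply = s.recv(4096)        # the reply carries the Request ID to reuse later
    # 2. Job data request, using the (here pretend) Request ID from above.
    send_packet(s, ["REQ=4321", "JOB=1001"])
    print(s.recv(4096))
```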
  4. Hi Art, Haven't worked with WinHex myself, but it should be more than fine. In any case, the two functions you need in order to prepare a data file and transfer the full block are:

1. If you do this more than a few times: search&replace that allows you to search for a text string (e.g. "<fs>") and replace it with the hex value (e.g. the one byte 1C). If you already have a bunch of files, then of course bonus points for search&replace across files.
2. The ability to select and copy the text data, rather than the hex data.

The second one, the ability to select the text, is the important one for transferring to a terminal like PuTTY. Right-clicking the PuTTY window pastes the clipboard content, so if you select the correctly formatted data, copy, and then paste, you get the whole block in there. This should be very basic; I'd expect every editor to let you select and copy the text of an edited file, correctly copying the non-textual characters too. If not, heck, just save the modified file, open it in regular Notepad, and copy from there (Notepad won't show the control characters, but it will copy them correctly).

For search&replace, as I wrote, I'm not familiar with WinHex. If it lets you do that, and if it lets you do that across multiple files, then great, you're set. One hex editor I work with that can do it is HHD's Hex Editor Neo, which supports searching the open file for text while replacing with bytes/binary data. It can also do that across multiple files, but that's a paid feature not available in the free version. Alternately, a surprisingly good option may be Notepad++. It's a text editor, but its search&replace does support character values in a regex replacement, and it works both on the open file and as search&replace in files. So you can do a regex match for "<fs>", replacing with "\x1c", which will work fine. It will preview the control characters in open files, and it will copy the data correctly. The line separators are already fine in the text you have (unless you're using something explicitly listing "<cr><lf>", in which case that really will be too much work to do manually), so you probably just need to deal with <fs>, <rs> and <gs>.

One issue this still leaves you with: to send from PuTTY you'll need to press the Enter key, which sends another <CR><LF> pair that the simulator won't expect. As an alternative to PuTTY that's easier to use in this scenario (you basically want a clean/raw TCP/IP terminal/console), try WindTerm. Much better for this purpose than PuTTY. You can configure a TCP/IP connection, and it allows having multiple open "Sender" tabs at once that you can easily switch between to send different types of data. You can paste into it from somewhere else, add files directly, and it also allows editing both text and hex characters inline. The interface is a bit clunky, but works well once you get the hang of it.
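And if you'd rather script the replacement than do it in an editor at all, a small sketch doing the same marker-to-byte substitution on a file (the file name is hypothetical, and the marker table only covers the three characters mentioned above; extend it to whatever your files use):

```python
import pathlib
import re

# Human-readable markers -> the actual control bytes (per Table 1's values).
MARKERS = {b"<fs>": b"\x1c", b"<gs>": b"\x1d", b"<rs>": b"\x1e"}

def replace_markers(path):
    """Rewrite readable markers in-place into real control characters."""
    data = pathlib.Path(path).read_bytes()
    for marker, byte in MARKERS.items():
        data = re.sub(re.escape(marker), byte, data, flags=re.IGNORECASE)
    pathlib.Path(path).write_bytes(data)

replace_markers("job_block.txt")   # hypothetical file prepared for the simulator
```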
  5. Michael, I answered Artiom with more info and suggestions on the other thread, the one he opened in the other forum ("DCS Discussions") with a screenshot of the attempted connection showing the data sent and what the DCS Simulator received. There's not much advantage to having this going on in two places. I'm adding here a link to the original discussion in case anyone sees this here and wants to follow, but if as an admin/mod you have a better way to merge these (or just remove this thread if nobody is following it specifically? Are there people who follow the Q&A forum but not the Discussions forum?), then it's probably a good idea to do so.
  6. Hi Art, There are a few issues, both in what you're actually sending, and in that PuTTY isn't really an ideal tool here, since the DCS protocol, while mostly textual, has specific control characters that PuTTY of course doesn't handle automatically.

To start, looking at the communication log in your screenshot, you're opening your connection as a "Telnet" connection, which involves signaling that the simulator doesn't expect (because it isn't a Telnet server). So about the first 2/3 of the connection you're seeing is just PuTTY trying to initiate a Telnet connection, and the DCS Simulator not understanding it. This one is the easiest problem: when you set up the connection in PuTTY, choose "Raw" as the Connection Type, not Telnet.

This brings us to the second issue, the DCS control characters ("Reserved characters"). To start (sorry for the pun), the FS (Start of message) character is not actually a text string reading "<FS>". That <xx> format is a notation for a single control character (in this case hex code 0x1C, decimal 28). So, from the preview of the PuTTY window in your screenshot, it's supposed to be starting a new DCS message with a TRCFMT record; the decimal codes of the individual characters in sequence, if it were correct, would be 28 84 82 67 70 77 84 61 49. The 28 is <FS>, followed by the text (then followed in a real packet by 13 10, which is <CR><LF> for End of Line). Instead you're typing "<FS>" as a literal string into PuTTY, so the DCS Simulator doesn't receive the <FS> character. If you look at the error log, this is the sequence it doesn't recognize: 60 70 83 62. It just gets each of the individual characters, which is wrong; these are, again, only the human-readable representation of the relevant control character. (A quick byte-level check follows below.)

So, to use PuTTY, you'll need to send the control characters correctly. If you have full/large data you want to enter without typing it manually, edit the file in advance to match (use a hex editor, or a text editor that allows search&replace with individual non-textual character codes; do you need recommendations?). You can then copy from the file and paste into PuTTY. That should make it send the full correct sequence. Again, make sure to replace all the "readable" control character representations with the actual character codes.

If you're fine with typing manually, all of these characters can also be sent directly with a Control key combination (Ctrl + key). For example, you can send the <FS> character in the PuTTY terminal by holding Ctrl and pressing the "\" key. If you check the VCA DCS document, Table 1 "Reserved Characters" lists all the needed characters, each with its readable text representation, hex and decimal values, and control key code. For the control keys, if you want to type in the terminal rather than copy from a file, notice that holding Ctrl is represented in the table with a "^" character prefix. So for <FS> the table says ^\, which means hold Ctrl while pressing the "\" key. The exception is <RS>, shown as ^^, i.e. Ctrl+^, easiest on most keyboards as Ctrl+Shift+6 (the "6" above the letters, not on the keypad). Also, for end of line/record, just use ^J (<LF>, 10, 0x0A); it's enough for the DCS Simulator to separate records, and it won't cause PuTTY to send an incomplete packet the way CR/^M will. I hope this helps.
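For anyone who wants to see the difference between the two sequences above, a two-line check (Python used just as a calculator here):

```python
framed = b"\x1c" + b"TRCFMT=1"     # the real thing: <FS> byte, then the record text
print(list(framed))                # [28, 84, 82, 67, 70, 77, 84, 61, 49]

typed = b"<FS>TRCFMT=1"            # what typing "<FS>" literally produces
print(list(typed[:4]))             # [60, 70, 83, 62] -- the unrecognized sequence
```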
Though to be honest, all this is fine if you're developing software to communicate with a Host and want to practice the protocol to be sure you understand the behavior correctly. But if you're testing the actual VCA records you're sending/receiving, this adds a lot of overhead, and you're better off using existing software from some device/machine vendor to send/receive the data, and just testing your data with that...
  7. Hi Ryan, Can you post the actual data/trace that you're sending? Assuming there is some actual issue (whether something incorrect/off in the trace data itself, or something correct but a little unusual that the edger possibly isn't expecting, so doesn't know how to handle), seeing what the edger receives would be... helpful... for trying to guess what's causing it a problem. So please provide at least one trace that the edger complained about, though ideally the full communication log between the LMS and the edger for a job with a frame trace that caused the problem (in case the issue isn't with the trace itself but a mismatch with some other related label).
  8. Ah, and one more quick typo. In table A.13 "Process Status ("INF") Packet", the table lists the label STATUS as STATUSL. I don't see it in 3.11 (correctly STATUS), but I do see it in 3.12.
  9. Hi, We added support for REQ=INF reports on our machines a long while ago. Recently, following a talk with a lab, I took a look at our implementation and noticed that for some reason we were sending MODEL to the Host. That seemed a little odd: it doesn't really make sense to me for it to be there, and it doesn't make sense to me anywhere without DEV and maybe VEN records. So I checked the VCA DCS document, and the section states: "The INF request type will contain only request type, job, status, and (optionally) CRC and model ID records." This is the same in the v3.13 draft 30 PDF, which is the most recent VCA DCS document I have, section 7.3.5 on page 92 (by TOC) / 95 (physical), and in 3.06 (the earliest I think I have), section 6.3.6 on page 44 (TOC) / 50 (physical).

Searching for "model ID", I see it mentioned only a few other times in the document, but always for MODEL (i.e. in a section grouped with what are, or should be, DEV and VEN), even though elsewhere MODEL is just referred to as "model" or "machine model" (which is also how it's called in its definition). I'm pretty certain the intent here was to specify MID (which we sent as well, despite that "only", since it was usable by Hosts we worked with), not MODEL. Which makes even more sense considering MID is explicitly stated to be optional on requests, similar to CRC.

So: Is this indeed a typo/mistake in the DCS, and this should be MID? Should I stop sending MODEL, and should the DCS be corrected for the next 3.13 version? Or am I understanding this correctly and MODEL should be here? In that case, can you please explain why? Also, how important is that "only"? This specific interaction started after seeing a communication sample to a Host from a fairly serious vendor, where the sample didn't include MID but did include both VEN and MODEL. Beyond probably being superfluous, is this "wrong"?
  10. Hi Robert, Extremely late response on my side as well, I missed your response at the time, sorry about that.

1. At the moment there are only three sections there (3.11, 3.12, 3.13), so the work of reordering (if it's agreed to do so) is just switching around 3.11 and 3.13. While it could be a lot of work in theory, in this case it seems to me to be very little. That said, since it's so little work, of course I don't mind doing it myself. Except I would need the original editable document to do so, not the currently published PDF. And I expect that the concern over compatibility between the editors (Word 365 that the PDF was made with, and my OpenOffice/LibreOffice), and wanting to verify the correctness of everything, especially on a document with active change tracking, may cause Paul (or whoever would otherwise do this) a lot more work and worry than doing two quick select>cut>paste operations?

2b. If this were written as a recommendation of maximum length, so recommending terseness, I'd actually fully agree. It's acceptable for a standard to recommend short values without enforcing them. It's even fine to require a maximum length with some specific exclusion criteria (the value is made up of words and extends to finish the last word? there are multiple values which are very similar, so a few more characters are needed to clarify the differentiation? something else?). But this is presented as a requirement/specification of maximum length, combined with a complete waiving of that maximum whenever it is wanted, without any specific criteria. "There is no maximum length but it's recommended to limit to X characters" can make perfect sense. "There is a maximum length of X characters unless the length is longer" not so much.

3. Same point: I agree that allowing overrides is fine, but that requires specifying explicit cases for the overrides. If the override is completely free and unlimited, then the constraint is meaningless, and should be either waived or rewritten as a recommendation. "You must/can't do X unless Y" is fine, except where Y is "you want to" instead of something a lot more specific.
  11. The reason to send a conveyor DO data different from the default would be to have it match what is sent to a specific machine/device that the conveyor needs to consider (e.g. if any and all devices on a line receive DO=L, then a generic device should also receive DO=L and not DO=B). But in that case, having a different device type for conveyors won't help in any way: the conveyor itself could be at any part of the lab process. Knowing it's a conveyor rather than a generic device doesn't tell you whether it's a conveyor feeding a generator, engraver, polisher, etc. So a new device type won't change anything; you'd still need to tie it to a specific location (the device it stands before). Which I guess would be done by setting a different MID for each conveyor, and configuring the host to send a different DO depending on MID? In that case, though, since the decision on the host is in practice entirely dependent on MID, what is the benefit of having a "CVY" device type to check CVY+MID when you can just as well check DNL+MID? The device type doesn't add or change anything, since MID should probably be specific to a single device anyway.

Or, to avoid the MID tailoring: since in this scenario the Host is already differentiating based on device type, it may be more practical to have the conveyor present itself as the relevant device. The conveyor before the generator(s) can say it's GEN, the one before the polisher(s) can say it's FSP (?), and so on. That should give the conveyor full knowledge of what the machines it feeds should or shouldn't do.

If the possibility of modifying the standard is considered, though, I think it would be a better idea not to add another DEV option, but instead to expand the set of DO___ labels. That would directly do the work of letting a generic device get the important data for all machines, and would allow DO to be properly handled as a single piece of job-status information, instead of having the Host modify it per device. Such variations currently exist for some of the machines, like DOENG for the engraver (i.e. if it's a two-lens job that shouldn't be engraved, you can, and should, send "DO=B" and "DOENG=0;1" for such cases, instead of changing to "DO=L" especially for the engraver), DOCOAT for coating, and a few more, but there are (as far as I know) no equivalents for other devices such as polisher or generator (DOGEN does something else). Expanding those seems like a good way to disentangle the general what-in-the-job-should-be-processed of DO (letting it be the same for all supporting devices, without having to adjust it per device) from a very straightforward and accessible way to indicate what individual devices should do. (A small sketch of that direction follows below.)
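To make the suggestion concrete, a sketch of how a generic device (or conveyor) could resolve what to do from DO plus a per-device DO___ label, if such labels existed across the board. DOENG is real; the R;L field order, the fallback logic, and the hypothetical polisher label are my assumptions.

```python
job = {"DO": "B", "DOENG": "0;1"}    # two-lens job, engrave the left lens only

def sides_for(job, device_label):
    """Return (right, left) process flags, falling back to the job-level DO."""
    value = job.get(device_label)
    if value is not None:                     # per-device override, assumed "R;L" flags
        r, l = value.split(";")
        return bool(int(r)), bool(int(l))
    do = job["DO"]                            # B = both, R = right only, L = left only
    return do in ("B", "R"), do in ("B", "L")

print(sides_for(job, "DOENG"))     # (False, True) -- engraver skips the right lens
print(sides_for(job, "DOPOL"))     # hypothetical polisher label; falls back to DO=B
```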
  12. @SJO - ENGLOC seems to have not been changed from previous versions, it's still type Literal rather than changed to Literal[;] as you suggested. Are you looking at the same document that has been posted here?
  13. A few comments on draft ID 30. None I see as extremely critical, except the last (which is therefore very much an objection rather than a comment/suggestion/question). I'm basing this on the Revision History list, assuming anything changed in 3.13 so far is listed there; I did not read through the whole document.

Regarding the Revision History itself - I think the revision order (newest on top) is the wrong choice here. This ordering makes sense for things like blogs and on-page web updates, but in a section of a closed document, oldest to newest would make more sense to me.

Literal data - The updated definition (in 3.3 and 5.1.7.8) is a substantial improvement, and much clearer. That said:
- 1 - There is some difference between the more general definition in 3.3.2 and the more specific one in 5.1.7.8. The one significant thing done in 5.1.7.8 that I think would be an important addition to 3.3.2 as well is that all possible values are to be enumerated in the record definition. Maybe change 3.3.2 from "...having permissible values specified in the standard" to something like "...having permissible values enumerated in the standard"?
- 2 - Since the standard is now clearer that all values are enumerated, is there a point in specifying a maximum length? Especially when 5.1.7.8 both specifies a maximum length of 12 and allows the definition of each usage to go beyond 12? This is effectively no length limit when defining a Literal record field (since overriding 12 is explicitly allowed), and the maximum length is pointless for anyone using a Literal record field (if all values are enumerated, then specifying a length is meaningless; the maximum length is always the length of the longest enumerated value). The maximum length in the definition of Literal should either be waived completely, or be specified in a way that doesn't allow a record definition to override it. (I'd prefer the former, but either option is better, and internally consistent, compared to the current, effectively "there is a maximum length and it can be ignored by everyone".)

Reference Coordinate System for Backside Engraved Lenses - Previous issues with terminology and usage aside, just a quick note that 5.2.3.1 has an internal reference error, I assume to the related Figure 2 below the section, stating "Error! Reference source not found." in the middle of the paragraph.

Removed new DEFAULT label - The Revision History lists having added a new label DEFAULT, which is not actually added anywhere in the document. I see that it was there in an earlier draft, so I assume it was decided to drop the label, and it's just the Revision History that needs to be adjusted to match.

ENGMARK coordinate system - Talked about this in the past: the changed definition of the ENGMARK coordinate system origin is highly problematic. From the previous revision I see the definition changed from trying to define it on blank center to trying to define it on block center; this is irrelevant to the practical objections and has the exact same problem. Again, there are plenty of labs who, for quite a lot of years now, rely on the fact that the coordinate source for engraving (using ENGMARK or ENGMASK records) is ER, not SB (the reference point for an engraving operation being the Engraving Reference). Engraving is decentered/offset from the block center in plenty of labs by using SBBC__ + BCER__ records.

Using the exact same set of job records, a change from 3.12 to 3.13 should absolutely not cause the engraving to move. It even more certainly should not cause the engraving to move to where the lab does not expect, or want, it to be. This definition expansion isn't clarifying things, it's changing things, and in a way that will have clear, significant, and unwanted impacts on labs. I absolutely don't see any benefit whatsoever to doing it, just many downsides. Why change the origin of an already widely-in-use label?
  14. In theory you're correct. In practice, is this worth making a change? It doesn't serve any purpose: in the current/new DCS version, ENGLOC indicates on which side the fiducial (±17mm) marks are located, for anything that tries to look for them after engraving. And HASENG indicates if there even are such marks. So the only case where ENGLOC needs to be chiral is if it's F for one lens and B for the other. Which I'd think would never be the case. (Right?) Otherwise, there is no confusion or ambiguity. In your sample it would be ENGLOC=B, meaning that for this job the marks are supposed to be on the back side, and that only the left lens has those marks (which are on the back side). "Which eye has marks?" and "where are the marks located?" can be (and currently are) independent questions. There is an N value for ENGLOC only for historical reasons, I assume, from before its purpose/meaning was changed (in 3.11, I think?), before there was HASENG. Maybe before there was DOENG (I don't have older DCS documents at hand to verify)? And just as a convenience, to have a "nicer" value where no lenses have marks.
  15. I want to mention again that the options aren't only to go ahead with the increase as-is in 3.13 vs. making a new major version 4.0. There are some possible modifications to the change that would prevent any breaking behavior, and so make it technically eligible for 3.13. Please check my last comment on the discussion thread, listing some options that occurred to me, including one that came from Paul.

I also think a lot of people maybe didn't read the discussion, or notice the details of why I claim it's a breaking change. So, instead of yet another explanation, an example to illustrate the issue (a reconstruction is sketched after this post). It shows a device->host initialization request, and host->device job data. These are simplified for the purpose, and to make it more readable, they pretend the previous limit was 20.

Starting situation, initialization, device to host: if the device and host were using DCS 3.07, and the device changed to 3.12, nothing happens. If the host changed to 3.12, nothing happens. Everything keeps working as is. Nobody cares what version the other side has, because it shouldn't matter when not using new fields/labels. But if the device changes to 3.13, this happens automatically: if the host is not up to 3.13, it's possible for it either to stop with an error, because it doesn't have a valid D at all, or to just return ADD, SVAL, TIND, and LIND, completely ignoring the other three labels that the device needs. Most up-to-date hosts probably support receiving unlimited length anyway, but they don't have to, and not all hosts in all labs are up to date and well maintained. Not being able to process lengths beyond the official maximum is entirely, 100% fine and in compliance with the DCS. Which is why this is a breaking change: the device was updated, there were no configuration changes, no new labels or fields are used, but things stop working.

From the other side, same thing, assuming this is job data: if the host upgrades from 3.07 to 3.12, nothing happens. If the device upgrades from 3.07 to 3.12, nothing happens. There weren't any configuration changes, no new labels or fields are used, all is fine. But if the host changes to 3.13, and the device isn't on 3.13, it's possible for the device to stop with an error, because it doesn't have the radius data it needs. So, again, a breaking change, because merely updating the host, without any configuration changes, and without trying to use any new labels or fields, can cause some devices to stop working, for jobs and data where they worked perfectly fine before. As with the host side, this likely won't cause any issue with most devices, which may not have a length limit for receiving. But they're allowed to have one, so some can have a problem with the limit. And no other changes along the way impacted them at all, yet this will automatically cause them to stop working properly.
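A hedged reconstruction of the inline samples that post refers to (the originals didn't survive here; Python is used just to keep the records readable, and LBL5/LBL6/LBL7 are pure stand-ins for "the other three labels"):

```python
# Pretend, as above, that the old maximum record length is 20 characters.

# Device on 3.12 or earlier: the wanted labels are split across D records
# so that each record respects the 20-character limit.
old_init = [
    "D=ADD;SVAL;TIND;LIND",   # exactly 20 characters
    "D=LBL5;LBL6;LBL7",       # hypothetical stand-ins for the other three labels
]

# Device on 3.13: the same set may go out as one long record, legal under
# the new limit but over the old one.
new_init = ["D=ADD;SVAL;TIND;LIND;LBL5;LBL6;LBL7"]

# A host still enforcing the old limit may reject the long record outright,
# or keep only what fits (ADD..LIND) and silently drop LBL5..LBL7 -- the
# breaking behavior described above. The same logic applies in reverse to
# job data, e.g. a host packing more radii into each R record than an older
# device's fixed-length buffer will accept.
```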
  16. Christian, Main part of your comments, re usage of ER and re BE: when working within the same axis system, there is no difference between "where X should be placed" and "where X was placed", for the same X. So "this is where the center point of the engraving is" is identical to "where should the center point for engraving be". And, given how the Reference Points are all connected, it's very usable that way. Specifically in our use case (and I'd imagine other engravers'), if something has to be engraved, and what the machine physically knows (common use case: engraving on the back side during surfacing) is where the block is located, then to find where the center of the engraving should be (which, again, is the same as where it would be after it's done), we generally use SBBC + BCER. (And, as a side note, notice that while it's true the usage labels include ERNR and ERDR, indicating ER is usable as a source/origin, there is and was also BCER, indicating ER is usable as a target to get to from somewhere else. A sketch of that offset chain follows below.)

I also think you're probably wrong when you write that ER is "most often" used for already-marked points as a reference for something else. It's entirely possible it was like that 20, maybe even 10, years ago. But at this point, and for a very long time now, the overwhelming majority of labs do engrave/mark lenses as part of the process. And so, again, if that engraving should be centered on ER when it's done, then it should be done around ER. We've been using BCER to position engraving since about when ENGMASK was first introduced (3.06, I think? around early 2007?). And it has been used (i.e. with non 0/? values) by various labs (and LDS vendors) as the main way to indicate decentration for engraving. That's pretty common, established, industry-wide accepted use, and one which is usually done on the back side of the lens.

For marking on the front side: I absolutely agree that at this point it's rarely done, which is why FB is rarely needed for engraving. But it is done in some places, and it's being done more and more. So I think the question of "when being asked to engrave on the front side, where should I engrave, if I'm not explicitly told to engrave relative to the frame?" is a very valid question (together with its counterpart: "when we want an engraver to engrave on the front side, how do we tell it where to engrave, if we don't want it relative to the frame?"). And I think it's better to get a standard, agreed answer early, instead of just letting engraver vendors all do whatever they personally think is best and trying to sort it out later. I don't quite get the attitude of "no need to decide before people start using it a lot; let's allow wide usage with no rules and no specification in the standard before trying to figure out the best way to proceed".
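For illustration, the offset chain mentioned above in miniature. The values are invented, the two vectors stand for the SBBC__ and BCER__ record pairs, and treating in/up as x/y is my simplification:

```python
sbbc = (0.0, -2.0)   # surfacing block center -> blank center, mm (illustrative)
bcer = (1.5, 0.5)    # blank center -> engraving reference, mm (illustrative)

# What the machine physically knows is the block; summing the chain gives
# where the engraving should be centered, relative to that block.
engraving_center = (sbbc[0] + bcer[0], sbbc[1] + bcer[1])
print(engraving_center)   # (1.5, -1.5)
```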
  17. Basically yes. There can be a few other valid options (in this case "valid" meaning technically usable to get the wanted result without breaking anything), though probably more complex. As you requested, I'll list them here briefly, without the reasons/explanations inside the options themselves. If anyone reading this has a specific question and can't find the answer on this thread, I'll be happy to answer with at least my understanding of what the option is and why I present it.

Options to increase the record value length (change the max length limit from a record length of 80 to a record-value length of 255) without making breaking changes inside a minor version change:

1. Increase the record-value length for everything, and do a major version increase, to 4.0. Devices/hosts that support it should treat it accordingly (not changing to the new version automatically without manual configuration, while 3.12- systems are still out in the field).
2. Increase only for specific records, which won't change "automatically" (without anyone entering new job data in different ways/formats) anyway if the limit is extended: XSTATUS, LDPATH, ENGMARK..., as well as all experimental/vendor-specific labels. All other records stay limited to the 80 record length.
3. Increase only for single-record labels (and all experimental/vendor-specific labels). All multi-record labels (D, R, A, ZA...) keep the 80 max record length (except those that already explicitly extend it; nothing changes there).
4. Increase only for single-record labels (...) which have at least one Text field. All other labels keep the 80 max record length (...).
5. Increase only when communicating with something (device/host) that has clearly identified itself as supporting 3.13 (e.g. a device that sent OMAV=3.13+ during initialization), and add an option during initialization for the host to respond with its own supported OMAV. Any records written/sent without this (during initialization, to files, to devices/hosts that didn't report their version) keep the 80 character record length.

Notice that 1, 2, and 5 are guaranteed not to cause any in-minor-version breaking changes, regardless of usage of any other labels (2 will require some manual work "now" to decide which standard labels are included to begin with; 5 should be robust, and works similarly to other such changes in the past, like TXTENC, but therefore makes adopting the change more complicated; a sketch of 5 follows below). 3 and 4 seem to me to be enough to prevent any breaking changes in practice (by themselves, merely from increasing the length); at least I wasn't personally able to think of a use case where a problem would occur with them. And they should be clear to define, and relatively easy to implement. 3 basically excludes what is likely to cause problems, and 4 tries to exclude anything that probably doesn't have a valid reason to "need" the extension (for its current purpose).

(As an aside, 1 works "correctly" by being the cause for 4.0, delaying the currently planned 4.0 to 5.0. It's not relevant to combine with the currently planned 4.0, since that involves a data format change to things like (from my understanding) JSON / XML, which already don't have any forced max length limit on most data types, like Text.)
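A sketch of option 5's gating, under the assumption (mine) that the peer's declared version ends up available as a plain version string after initialization:

```python
def max_record_length(peer_omav):
    """peer_omav: version the peer declared (e.g. "3.13"), or None if unknown."""
    if peer_omav and tuple(int(p) for p in peer_omav.split(".")) >= (3, 13):
        return 255    # new record-value limit: peer confirmed 3.13+
    return 80         # legacy record length: files, old peers, no declaration

print(max_record_length("3.13"))   # 255
print(max_record_length("3.07"))   # 80
print(max_record_length(None))     # 80 -- when in doubt, stay conservative
```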
  18. The concept is always more important in theory, but the nomenclature (name, quick description) is what people process first, and what they process when they quickly skim through something, or search for something, or just try to refresh their memory for reference. If it's not possible to understand what something is, or what the difference between two things is, without going fully into the details, then the names/descriptions are bad. I'm not sure if the final decision was to keep this as a 2-character name (e.g. BE) or a 3-character name (BER), but maybe use instead something like "FO" (or "FOB" if 3 letters, "FOBE" if 4), with a description like (trying to keep as close to the existing description as I can while clearly changing the intent): "Find OC from Back Engraving. The observed midpoint between the semi-visible alignment marks seen on an already finished lens, used to find OC on finished lenses. Origin of the back surface reference coordinate system." Both the name and the description can't easily be confused with any of the other points, and the purpose and usage are clear.

(Note: I'm not sure "finished" is the correct word there. The purpose was to clarify that it's not used when making the lens, so it will be obvious as not relevant for devices during production in the lab, and similarly obvious as relevant for inspections/checks/diagnostics later. I assume there's a term for it that can't be confused with a lens during finishing/edging, but at the moment I can't recall what it would be. People who actually work with lenses/jobs at that stage would anyway have a much better idea than me about the correct terminology, or whether this is or isn't confusing between the two states as-is.)

Also, I didn't see anyone respond to the topic of whether this point should really be together with the other Reference Points (in Table 2), given that all of the rest share a coordinate space and are translatable in the same way (I think?), and this one requires special and unique handling. Maybe it should have its own sub-section?
  19. Both, or either, depending on context. Most of the marks are done on the back side, semi-visible marks, during surfacing, on lenses attached to surface blocks. That's the main usage of our engravers in labs. But there are also cases where we mark on the front, on finished and edged lenses attached to finishing blocks (again, usually for things like adding logos and such, visibly). The same LMS/VCA Host should be able to provide instructions (e.g. via ENGMARK labels) for both. Which is fine; ENGMARK supports engraving on both front and back. So ENGMARK labels for the back go to the "regular" system, and these are oriented around ER. ENGMARK labels for the front go through the "logo" system/module, and these are oriented around... what we're trying to decide here. In practice this is usually oriented to the frame anyway, so it won't matter. But it can potentially not be, and there should be a standard interpretation of what the reference is in those cases.
  20. Again, I think that (for anyone who doesn't know in advance what it's supposed to mean) this is indeed confusing and potentially ambiguous in context. From the same Table 2, let's show the two items in sequence. I'll copy the first (BE) from your quote above (different from the 3.13 draft I have, so I'll assume you're using a newer draft), and the second (ER) from the draft I have (it hasn't changed for quite a few versions anyway). You see the issue? Both are an "Engraving Reference Point", both are a "midpoint between semi-visible alignment marks". The only stated difference is that BE is explicitly on the "back", which isn't really a difference, since practically always the marks will be on the back for engraving when using ER, so the definition of ER might as well have "back" in it. And yet the purpose/usage is very different: ER is used to determine where to engrave. BE is used, by observing the engraving from a different angle, to get OC. So changing the name of BE, and describing the two differently, is important.
  21. I actually read Mark G.'s comment as essentially agreeing: saying his LMS will keep sending existing labels with the 80 character limit, even though that limit is no longer in the standard, in order to ensure that devices will be able to keep working. Just like I was saying devices will similarly need to keep sending the relevant labels within the 80 character limit, in order not to cause some Hosts to stop working. Also, again, the alternative isn't necessarily a major version increase. That is required if the change is kept as-is, simply increasing the length. But it's also possible to limit the scope of the size increase so that nothing breaks. The issue is with existing, already-used labels, where the increase will have an impact and won't be expected by older software. Which are, in practice, the multi-record labels (e.g. D, R, A...). So if the length limit is kept at 80 for all of those multi-record labels (the ones that don't already have an explicit length in their definition), and increased only for single-record labels (which, beyond being the majority, are also probably where the increase was wanted, and what the purpose of the increase is), that solves the problem. Otherwise, again, as I think Mark practically agreed, the change won't cause a problem only because everyone will ignore it, in a way differing from what the DCS states ("only increase on labels where required" vs "across the board").
  22. To be clear, you're still talking about marking on finished lenses, on the front side, from data provided by ENGMARK? If it's marking on the front side, it doesn't matter for the sake of this discussion how you physically hold the lenses (finishing block or something else); same job data records. If it's marking on the back side, then I'd think the base/center position should be the same ER regardless of whether the lens is finished or not. The relation between OC and ER hasn't changed, so the same logic should apply. No? The question here is "what should be the base position when marking on the front side, from an ENGMARK record "ENGMARK=?;?;?;?;?;F; ?"". If marking on the back side, I think it's always ER, but I don't see a clear definition for the front. From the quote from your email above, I understand that for you it's so far always OC, while for us so far it was FB (which I can see might not be available/relevant to you if you don't have it when you do this). And there should be something agreed upon by everyone, so different labs will work regardless of what devices they have (the purpose of the DCS).
  23. OK, Paul, since you keep insisting, I listened (well, mostly did quick skips throughout, to locate the relevant sections) to the recording of the VEW meeting. The topic of the record value length was discussed roughly from minute 14 to minute 29. Of those 15 or so minutes, there were about 0 seconds (that I could find) dedicated to discussing whether this is a breaking change, and whether it should cause an increase of the major version. There was, at the end of the session (3 hours 48 minutes), a discussion on 4.0, but it did not include any discussion of whether the record length change should itself justify a major version, as the definition of OMAV demands. So, since you keep responding to my points with, essentially, "this was discussed, listen to the discussion before you raise this again", please let me know when it was discussed, since I absolutely can't find it in the recording.

Also, at about 23 minutes, where most of the actual decisions were already made, mostly leaving the rest for details and bookkeeping, someone says something like "We'll put that up for review, send it out, see if labs have some reasonable objections". That... gives the impression that it should be fine to post further comments and objections, for the purpose of receiving actual feedback from the involved members, rather than, essentially, "it's basically decided, so it will be voted on as-is unless somebody decides to respond".

New point: at around 21 minutes (though generally all over the section) there was a focus on a sub-decision to also limit the length of experimental/vendor-specific label names. Which, you know, is technically a valid decision, but the only reasons I could figure out were that it visually felt too long to some people, coupled with some snickers at the existing "Excessive label length should be avoided". Is that really a good reason to reduce the length of something that may already be longer in practice in the field, and so possibly turn completely valid and compliant experimental/vendor-specific records under 3.12 into invalid records under 3.13? Sure, this might have been misused by some vendors, but the standard does say "should" and not "must", and "excessive" is open to personal interpretation. If someone created entirely valid "_IT_IS_VERY_IMPORTANT_TO_GET_THE_FULL_NAME_TO_AVOID_CONFUSION=1;2" records, "it's ugly" shouldn't be a reason to retroactively invalidate them. This is, again, potentially a breaking change for labs that use such labels. I would really appreciate the timestamp where this was discussed, or a response from other committee members who were in the discussion and can comment on why they want to go ahead with a breaking change (or two), instead of adding some further limits to make it non-breaking (e.g. keep the old max lengths for any previously introduced records, unless device and host explicitly identify as supporting 3.13, plus a way for the host to identify its supported version).
  24. Haim, Thank you for the response. What I posted here was a continuation of an email discussion following the 3.13 committee draft emailed by Paul on Sep 11, so I have not read any later documentation or modifications since. If the change you mention was discussed at the VEW meeting, or in correspondence in another circle, then it's great you bring it up here; that's one of the reasons we decided to transfer this to the forum.

If the new point was changed (again, from that 3.13 committee draft) from BE to BER, then having it be a 3-letter position, while all the others are 2, does help with separating it from the rest, given the different handling. I think it should also be further separated in the DCS document by listing it apart from the other Reference Points, given that they generally share the same coordinate system and this one doesn't.

And, again, I think the name is confusing and misleading, even after reading your email. The thing is, there is the (by now long-standing) Reference Point called ER, named "Engraving Reference". Which is what engraving/marking should be made from (the base coordinate for engraving data); what engraving done on the back side of the lens should be made from. So a new Reference Position, BE or BER (worse for this purpose), named "Back Engraving Reference", is a confusing name. It's basically the name of that other, older point, even though this one isn't the reference position for engraving. It is, as you write, just a way to find OC from where the center of a previously made back-side engraving appears to be, looked at from a different direction. So it shouldn't be called "Back Engraving Reference" when it's functionally so different from the back-oriented "Engraving Reference". I think it's extremely likely that people who read the DCS without reading the related discussions would be confused.
  25. In one of the emails from you, in the discussion before raising this on the forum, you sent a line which I took to mean that when your engravers or inkers need to mark on finished lenses (so on a finishing block, so marking done on the front side of the lens), the base position you use to determine where anything is marked (e.g. with ENGMARK records) is OC. So, for example, if you have "ENGMARK=TXT;O;;R;F;F;17.00;1.00;;;1.00;", then under the current interpretation (as I understand from the quote above), your (for whatever "your" it is you covered in your email) systems will expect the "O" character to be marked on the front side of the lens, 17mm nasally from OC and 1mm above OC. In contrast to, as I wrote, what is so far the expectation of the labs we do front-side marking on finished lenses for (usually for logos, safety marks, and such), which would be to engrave this 17mm nasally from FB and 1mm above FB. (And of course in contrast to "ENGMARK=TXT;O;;R;F;B;...", where, for engraving on the back side, this would be, I think in any case, 17mm nasally from ER and 1mm above ER.)

So the open question here is what the base/reference should be for engraving (regardless of whether it's permanent or not, as I think is currently the agreement for back-side marking as well) when it should be done on the front side. For our (LaserOp) needs that is something that should be decided, since the DCS should be explicit about it, to avoid confusion (i.e. if we already have two vendors doing it differently, better to standardize now, while front-side marking is still in its early days). But it's not urgent (I wouldn't delay the next version if there's no decision, though I'd strongly prefer there were one), because mostly (in practice, at least for our cases) the engraving in those cases is done relative to the frame (e.g. all is type ENGMARK=MASK, where the in-layout objects are set to move relative to the frame). So it's an important distinction in theory, but not yet in practice, for us, for just now.
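To show why the decision matters, the example record above placed against the two candidate bases. The base positions are invented, and taking nasal as +x for a right lens is my simplification:

```python
bases = {"OC": (0.0, 4.0), "FB": (0.0, 0.0)}   # hypothetical base positions, mm

nasal, up = 17.00, 1.00    # offsets from the ENGMARK example above

for name, (bx, by) in bases.items():
    print(name, "->", (bx + nasal, by + up))
# OC -> (17.0, 5.0), FB -> (17.0, 1.0): the same record puts the "O" 4 mm
# apart depending on the chosen base, which is why the front-side reference
# needs to be explicit in the DCS.
```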