It depends a bit on context. Cardinality means the number of something, but it gets used in a variety of contexts. In a PERSON table, for example, GENDER is likely to be a very low cardinality column (there are probably only two values in GENDER), while PERSON_ID is likely to be a very high cardinality column (every row will have a different value). There are probably other situations where people talk about cardinality using a different context and mean something else.
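To make the table example concrete, you can gauge a column's cardinality with a count of distinct values - a minimal sketch, assuming the PERSON table described above:
SELECT COUNT(DISTINCT gender)    AS gender_cardinality,    -- low: only a handful of distinct values
       COUNT(DISTINCT person_id) AS person_id_cardinality  -- high: one distinct value per row
FROM person;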
Views can simplify complex queries, restrict access to a subset of rows or columns, and present the same data in different shapes.
And you should not design tables to match views. Your base model should concern itself with efficient storage and retrieval of the data. Views are partly a tool that mitigates the complexities that arise from an efficient, normalized model by allowing you to abstract that complexity.
Also, asking "what are the advantages of using a view over a table? " is not a great comparison. You can't go without tables, but you can do without views. They each exist for a very different reason. Tables are the concrete model and Views are an abstracted, well, View.
SQL> select Username from dba_users
2 ;
USERNAME
------------------------------
SYS
SYSTEM
ANONYMOUS
APEX_PUBLIC_USER
FLOWS_FILES
APEX_040000
OUTLN
DIP
ORACLE_OCM
XS$NULL
MDSYS
USERNAME
------------------------------
CTXSYS
DBSNMP
XDB
APPQOSSYS
HR
16 rows selected.
SQL> create user testdb identified by password;
User created.
SQL> select username from dba_users;
USERNAME
------------------------------
TESTDB
SYS
SYSTEM
ANONYMOUS
APEX_PUBLIC_USER
FLOWS_FILES
APEX_040000
OUTLN
DIP
ORACLE_OCM
XS$NULL
USERNAME
------------------------------
MDSYS
CTXSYS
DBSNMP
XDB
APPQOSSYS
HR
17 rows selected.
SQL> grant create session to testdb;
Grant succeeded.
SQL> create tablespace testdb_tablespace
2 datafile 'testdb_tabspace.dat'
3 size 10M autoextend on;
Tablespace created.
SQL> create temporary tablespace testdb_tablespace_temp
2 tempfile 'testdb_tabspace_temp.dat'
3 size 5M autoextend on;
Tablespace created.
SQL> drop user testdb;
User dropped.
SQL> create user testdb
2 identified by password
3 default tablespace testdb_tablespace
4 temporary tablespace testdb_tablespace_temp;
User created.
SQL> grant create session to testdb;
Grant succeeded.
SQL> grant create table to testdb;
Grant succeeded.
SQL> grant unlimited tablespace to testdb;
Grant succeeded.
SQL>
I suggest you use the FLOAT datatype in SQL Server.
You really need to keep two issues apart:
1) the primary key is a logical construct - one of the candidate keys that uniquely and reliably identifies every row in your table. This can be anything, really - an INT, a GUID, a string - pick what makes most sense for your scenario.
2) the clustering key (the column or columns that define the "clustered index" on the table) - this is a physical storage-related thing, and here, a small, stable, ever-increasing data type is your best pick - INT or BIGINT as your default option.
By default, the primary key on a SQL Server table is also used as the clustering key - but that doesn't need to be that way!
One rule of thumb I would apply is this: any "regular" table (one that you use to store data in, e.g. a lookup table) should have a clustering key. There's really no reason not to have a clustering key. Actually, contrary to common belief, having a clustering key actually speeds up all the common operations - even inserts and deletes (since the table organization is different and usually better than with a heap - a table without a clustering key).
Kimberly Tripp, the Queen of Indexing, has a great many excellent articles on the topic of why to have a clustering key, and what kinds of columns to best use as your clustering key. Since you only get one per table, it's of utmost importance to pick the right clustering key - and not just any clustering key.
Marc
Latitude needs at most 9 digits in total (e.g. 12.3456789) and longitude at most 10 (e.g. 123.4567890); both have at most 7 decimal places (at least that is what I can find in Google Maps).
For example, the two columns in Rails with PostgreSQL look something like this:
t.decimal :latitude, precision: 9, scale: 7
t.decimal :longitude, precision: 10, scale: 7
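In plain SQL the equivalent column definitions would be something along these lines (a sketch; the table name is illustrative):
CREATE TABLE locations (
    latitude  DECIMAL(9,7),   -- 2 digits before the decimal point, 7 after
    longitude DECIMAL(10,7)   -- 3 digits before the decimal point, 7 after
);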
Depends what you mean by a pattern. If you're thinking Person/Company/Transaction/Product and such, then yes - there are a lot of generic database schemas already available.
If you're thinking Factory, Singleton... then no - you don't need any of these as they're too low level for DB programming.
If you're thinking database object naming, then it's under the category of conventions, not design per se.
BTW, S.Lott, one-to-many and many-to-many relationships aren't "patterns". They're the basic building blocks of the relational model.
I was able to call a stored procedure in a view (SQL Server 2005).
CREATE FUNCTION [dbo].[dimMeasure]()
RETURNS TABLE
AS
RETURN
(
SELECT * FROM OPENROWSET('SQLNCLI', 'Server=localhost; Trusted_Connection=yes;', 'exec ceaw.dbo.sp_dimMeasure2')
)
GO
Inside the stored procedure we need to set:
set nocount on
SET FMTONLY OFF
CREATE VIEW [dbo].[dimMeasure]
AS
SELECT * FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;', 'exec ceaw.dbo.sp_dimMeasure2')
GO
Most answers here are strictly related to notations and syntax of the data models at different levels of abstraction. The key difference has not been mentioned by anyone: conceptual models surface concepts. Concepts relate to other concepts in a different way than an Entity relates to another Entity at the logical level of abstraction. Concepts are closer to types. Usually at the conceptual level you display types of things (this does not mean you must use the term "type" in your naming convention) and relationships between such types. Therefore, the existence of many-to-many relationships is not the rule but rather the consequence of the relationships between type-wise elements. In logical models, entities represent one instance of that thing in the real world. In conceptual models you do not expect the description of an instance of an entity and its relationships, but rather the description of the "type" or "class" of that particular entity. Examples:
- Vehicles have Wheels and Wheels are used in Vehicles. At the conceptual level this is a many-to-many relationship.
- A particular Vehicle (a car, for instance) with one specific registration number has 5 wheels, and each particular wheel, each with a serial number, is related to only that particular car. At the logical level this is a one-to-many relationship.
Conceptual covers "types/classes". Logical covers "instances".
I would add another comment about databases. I agree with one of the colleagues who commented above that conceptual and logical models have absolutely nothing to do with databases. Conceptual and logical models describe the real world from a data perspective using notations such as ER or UML. Database vendors, smartly, designed their products to follow the same philosophy used to logically model the world and then created relational databases, making everyone's lives easier. You can describe your organisation's data landscape at all levels using conceptual and logical models and never use a relational database.
Well I guess this is my 2 cents...
Yes, 4,2 means "4 digits total, 2 of which are after the decimal place". That translates to a number in the format of 00.00. Beyond that, you'll have to show us your SQL query. PHP won't translate 3.80 into 99.99 without good reason. Perhaps you've misaligned your fields/values in the query and are trying to insert a larger number that belongs in another field.
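To see the limit in action, here is a minimal sketch (the table and column names are illustrative; most databases raise an out-of-range error on the second insert):
CREATE TABLE t (price DECIMAL(4,2));
INSERT INTO t VALUES (3.80);    -- fits the 00.00 format
INSERT INTO t VALUES (123.45);  -- fails: needs 3 digits before the decimal point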
I've recently started using http://schemaspy.sourceforge.net/. It uses GraphViz, and it strikes me as having a good balance between usability and simplicity.
Yes, scaling horizontally means adding more machines, but it also implies that the machines are equal in the cluster. MySQL can scale horizontally in terms of reading data, through the use of replicas, but once it reaches the capacity of the server's memory/disk, you have to begin sharding data across servers. This becomes increasingly complex. Often keeping data consistent across replicas is a problem, as replication rates are often too slow to keep up with data change rates.
Couchbase is also a fantastic NoSQL Horizontal Scaling database, used in many commercial high availability applications and games and arguably the highest performer in the category. It partitions data automatically across cluster, adding nodes is simple, and you can use commodity hardware, cheaper vm instances (using Large instead of High Mem, High Disk machines at AWS for instance). It is built off the Membase (Memcached) but adds persistence. Also, in the case of Couchbase, every node can do reads and writes, and are equals in the cluster, with only failover replication (not full dataset replication across all servers like in mySQL).
Performance-wise, you can see an excellent Cisco benchmark: http://blog.couchbase.com/understanding-performance-benchmark-published-cisco-and-solarflare-using-couchbase-server
Here is a great blog post about Couchbase Architecture: http://horicky.blogspot.com/2012/07/couchbase-architecture.html
You need to set the "default value" for the date field to getdate(). Any records inserted into the table will automatically have the insertion date as their value for this field.
The location of the "default value" property is dependent on the version of SQL Server Express you are running, but it should be visible if you select the date field of your table when editing the table.
The history seems to look like this:
Google needs a storage layer for their inverted search index. They figure a traditional RDBMS is not going to cut it. So they implement a NoSQL data store, BigTable, on top of their GFS file system. The major part is that thousands of cheap commodity hardware machines provide the speed and the redundancy.
Everyone else realizes what Google just did.
Brewer's CAP theorem is proven. All RDBMS systems in use are CA systems. People begin playing with CP and AP systems as well. K/V stores are vastly simpler, so they are the primary vehicle for the research.
Software-as-a-service systems in general do not provide an SQL-like store. Hence, people get more interested in the NoSQL type stores.
I think much of the take-off can be related to this history. Scaling Google took some new ideas at Google and everyone else follows suit because this is the only solution they know to the scaling problem right now. Hence, you are willing to rework everything around the distributed database idea of Google because it is the only way to scale beyond a certain size.
C - Consistency
A - Availability
P - Partition tolerance
K/V - Key/Value
Go to Services and start the ones related to SQL.
The second approach is best.
If you want to normalize it further, you could create a table for question types.
We have had log tables in SQL Server with tens of millions of rows. There are some simple things you can do:
I suggest you search the blogs of Jason Massie (http://statisticsio.com/) and Brent Ozar (http://www.brentozar.com/) for related info. They have some post about real-life scenario that deals with indexes.
You can create materialized/indexed views on the table. Based on your requirements, you can do a full or partial update of the views. Please see this question on creating the materialized view and log: How to create materialized views in SQL Server?
Use a single formatted text column[1] for storing the tags and use a capable full text search engine to index this. Else you will run into scaling problems when trying to implement boolean queries.
If you need details about the tags you have, you can either keep track of it in a incrementally maintained table or run a batch job to extract the information.
[1] Some RDBMS even provide a native array type which might be even better suited for storage by not needing a parsing step, but might cause problems with the full text search.
For IPv4, you could get away with storing the 4 raw bytes of the IP address (each of the numbers between the periods in an IP address are 0-255, i.e., one byte). But then you would have to translate going in and out of the DB and that's messy.
IPv6 addresses are 128 bits (as opposed to the 32 bits of IPv4 addresses). They are usually written as 8 groups of 4 hex digits separated by colons: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. 39 characters is appropriate to store IPv6 addresses in this format.
Edit: However, there is a caveat, see @Deepak's answer for details about IPv4-mapped IPv6 addresses. (The correct maximum IPv6 string length is 45 characters.)
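A column sized for the worst case could therefore look like this sketch (table and column names are illustrative):
CREATE TABLE connections (
    ip_address VARCHAR(45) NOT NULL  -- fits IPv4, IPv6, and IPv4-mapped IPv6 strings
);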
I do not know of a row limit, but I know tables with more than 170 million rows. You may speed it up using partitioned tables (2005+) or views that connect multiple tables.
In medicine there are four genders: male, female, indeterminate, and unknown. You mightn't need all four but you certainly need 1, 2, and 4. It's not appropriate to have a default value for this datatype. Even less to treat it as a Boolean with 'is' and 'isn't' states.
One item is going to have many tags. And one tag will belong to many items. This implies to me that you'll quite possibly need an intermediary table to overcome the many-to-many obstacle.
Something like:
Table: Items
Columns: Item_ID, Item_Title, Content
Table: Tags
Columns: Tag_ID, Tag_Title
Table: Items_Tags
Columns: Item_ID, Tag_ID
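To pull all items carrying a given tag, a query against this schema might look like the following sketch (the tag value is illustrative):
SELECT i.Item_ID, i.Item_Title
FROM Items i
JOIN Items_Tags it ON it.Item_ID = i.Item_ID
JOIN Tags t ON t.Tag_ID = it.Tag_ID
WHERE t.Tag_Title = 'sql';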
It might be that your web app is very very popular and need de-normalizing down the road, but it's pointless muddying the waters too early.
1) Poor understanding of how to properly interact between Java and the database.
2) Over parsing, improper or no reuse of SQL
3) Failing to use BIND variables
4) Implementing procedural logic in Java when SQL set logic in the database would have worked (better).
5) Failing to do any reasonable performance or scalability testing prior to going into production
6) Using Crystal Reports and failing to set the schema name properly in the reports
7) Implementing SQL with Cartesian products due to ignorance of the execution plan (did you even look at the EXPLAIN PLAN?)
The W3Schools example isn't saying when you should use compound primary keys, and is only giving example syntax using the same example table as for other keys.
Their choice of example is perhaps misleading you by combining a meaningless key (P_Id) and a natural key (LastName). This odd choice of primary key says that the following rows are valid according to the schema and are necessary to uniquely identify a student. Intuitively this doesn't make sense.
1234 Jobs
1234 Gates
Further Reading: The great primary-key debate or just Google meaningless primary keys
or even peruse this SO question
FWIW - My 2 cents is to avoid multi-column primary keys and use a single generated id field (surrogate key) as the primary key and add additional (unique) constraints where necessary.
This is with reference to @Ladislav Mrnka's answer on using the fluent API for configuring a one-to-one relationship.
I had a situation where requiring the dependent's FK to also be its PK was not feasible. E.g., Foo already has a one-to-many relationship with Bar.
public class Foo {
public Guid FooId;
public virtual ICollection<Bar> Bars;
}
public class Bar {
//PK
public Guid BarId;
//FK to Foo
public Guid FooId;
public virtual Foo Foo;
}
Now, we had to add another one-to-one relationship between Foo and Bar.
public class Foo {
public Guid FooId;
public Guid PrimaryBarId; // needs to be removed (from the entity), as we specify it in the fluent API
public virtual Bar PrimaryBar;
public virtual ICollection<Bar> Bars;
}
public class Bar {
public Guid BarId;
public Guid FooId;
public virtual Foo PrimaryBarOfFoo;
public virtual Foo Foo;
}
Here is how to specify one-to-one relationship using fluent api:
modelBuilder.Entity<Bar>()
.HasOptional(p => p.PrimaryBarOfFoo)
.WithOptionalPrincipal(o => o.PrimaryBar)
.Map(x => x.MapKey("PrimaryBarId"));
Note that the PrimaryBarId property needs to be removed from the entity, as we specify it through the fluent API.
Also note that the method name WithOptionalPrincipal() is kind of ironic. In this case, the principal is Bar. The WithOptionalDependent() description on MSDN makes it clearer.
The table is only dropped and re-created in cases where that's the only way SQL Server's Management Studio has been programmed to know how to do it.
There are certainly cases where it will do that when it doesn't need to, but there will also be cases where edits you make in Management Studio will not drop and re-create because it doesn't have to.
The problem is that enumerating all of the cases and determining which side of the line they fall on will be quite tedious.
This is why I like to use ALTER TABLE
in a query window, instead of visual designers that hide what they're doing (and quite frankly have bugs) - I know exactly what is going to happen, and I can prepare for cases where the only possibility is to drop and re-create the table (which is some number less than how often SSMS will do that to you).
Here are some real-world examples of the types of relationships:
A relationship is one-to-one if and only if one record from table A is related to a maximum of one record in table B.
To establish a one-to-one relationship, the primary key of table B (with no orphan record) must be referenced by a unique foreign key in table A (which may have orphan records).
For example:
CREATE TABLE Gov(
GID number(6) PRIMARY KEY,
Name varchar2(25),
Address varchar2(30),
TermBegin date,
TermEnd date
);
CREATE TABLE State(
SID number(3) PRIMARY KEY,
StateName varchar2(15),
Population number(10),
SGID number(6) REFERENCES Gov(GID),
CONSTRAINT GOV_SDID UNIQUE (SGID)
);
INSERT INTO gov(GID, Name, Address, TermBegin)
values(110, 'Bob', '123 Any St', '1-Jan-2009');
INSERT INTO STATE values(111, 'Virginia', 2000000, 110);
A relationship is one-to-many if and only if one record from table A is related to one or more records in table B. However, one record in table B cannot be related to more than one record in table A.
To establish a one-to-many relationship, the primary key of table A (the "one" table) must be referenced by a foreign key in table B (the "many" table).
For example:
CREATE TABLE Vendor(
VendorNumber number(4) PRIMARY KEY,
Name varchar2(20),
Address varchar2(20),
City varchar2(15),
Street varchar2(2),
ZipCode varchar2(10),
Contact varchar2(16),
PhoneNumber varchar2(12),
Status varchar2(8),
StampDate date
);
CREATE TABLE Inventory(
Item varchar2(6) PRIMARY KEY,
Description varchar2(30),
CurrentQuantity number(4) NOT NULL,
VendorNumber number(4) REFERENCES Vendor(VendorNumber),
ReorderQuantity number(3) NOT NULL
);
A relationship is many-to-many if and only if one record from table A is related to one or more records in table B and vice-versa.
To establish a many-to-many relationship, create a third table called "ClassStudentRelation" which will have the primary keys of both table A and table B.
CREATE TABLE Class(
ClassID varchar2(10) PRIMARY KEY,
Title varchar2(30),
Instructor varchar2(30),
Day varchar2(15),
Time varchar2(10)
);
CREATE TABLE Student(
StudentID varchar2(15) PRIMARY KEY,
Name varchar2(35),
Major varchar2(35),
ClassYear varchar2(10),
Status varchar2(10)
);
CREATE TABLE ClassStudentRelation(
StudentID varchar2(15) NOT NULL,
ClassID varchar2(14) NOT NULL,
FOREIGN KEY (StudentID) REFERENCES Student(StudentID),
FOREIGN KEY (ClassID) REFERENCES Class(ClassID),
UNIQUE (StudentID, ClassID)
);
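Listing the students enrolled in a given class then becomes a join through the relation table; a sketch (the class id is an illustrative value):
SELECT s.StudentID, s.Name
FROM Student s
JOIN ClassStudentRelation csr ON csr.StudentID = s.StudentID
WHERE csr.ClassID = 'CS101';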
OLTP is mostly used for business transactions and for collecting business data. In SQL we use insert, update, and delete commands to work with small amounts of data at a time. OLTP databases are highly normalised, and OLTP is mostly used for maintaining data integrity.
OLAP is mostly used for reporting, data mining, and business analytics purposes, over large or bulk data. It is deliberately de-normalised and stores historical data.
I would follow this guide: https://github.com/bmoeskau/Extensible/blob/master/recurrence-overview.md
Also make sure you use the iCal format so as not to reinvent the wheel, and remember Rule #0: Do NOT store individual recurring event instances as rows in your database!
If you need to query on the list, then store it in a table.
If you always want the list, you could store it as a delimited list in a column. Even in this case, unless you have VERY specific reasons not to, store it in a lookup table.
"Line 1" - maybe something about windows vs unix newlines? (as i saw windows 7 mentioned above).
Alternatively, consider using a document database (such as MongoDB) which natively supports rich data structures and nesting.
It won't be fast, and there will be a lot for you to type (unless you generate the SQL from user_tab_columns), but here is what I use when I need to compare two tables row-by-row and column-by-column.
The query will return all rows that differ in at least one column, as well as rows that exist in only one of the tables (common identical rows will be excluded).
"PK" is the column(s) that make up your primary key. "a" will contain A if the present row exists in table1. "b" will contain B if the present row exists in table2.
select pk
,decode(a.rowid, null, null, 'A') as a
,decode(b.rowid, null, null, 'B') as b
,a.col1, b.col1
,a.col2, b.col2
,a.col3, b.col3
,...
from table1 a
full outer
join table2 b using(pk)
where decode(a.col1, b.col1, 1, 0) = 0
or decode(a.col2, b.col2, 1, 0) = 0
or decode(a.col3, b.col3, 1, 0) = 0
or ...;
Edit: Added example code to show the difference described in the comment. Whenever one of the values contains NULL, the result will be different.
with a as(
select 0 as col1 from dual union all
select 1 as col1 from dual union all
select null as col1 from dual
)
,b as(
select 1 as col1 from dual union all
select 2 as col1 from dual union all
select null as col1 from dual
)
select a.col1
,b.col1
,decode(a.col1, b.col1, 'Same', 'Different') as approach_1
,case when a.col1 <> b.col1 then 'Different' else 'Same' end as approach_2
from a,b
order
by a.col1
,b.col1;
col1 col1_1 approach_1 approach_2
==== ====== ========== ==========
0 1 Different Different
0 2 Different Different
0 null Different Same <---
1 1 Same Same
1 2 Different Different
1 null Different Same <---
null 1 Different Same <---
null 2 Different Same <---
null null Same Same
I would think that for a large part your or your client's requirements should dictate what precision and scale to use. For example, for the e-commerce website I am working on that deals with money in GBP only, I have been required to keep it to Decimal( 6, 2 ).
My answer is similar to this one on ServerFault.com.
If you want to be more conservative than granting "all privileges", you might want to try something more like these.
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO some_user_;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO some_user_;
The use of public
there refers to the name of the default schema created for every new database/catalog. Replace with your own name if you created a schema.
To access a schema at all, for any action, the user must be granted "usage" rights. Before a user can select, insert, update, or delete, a user must first be granted "usage" to a schema.
You will not notice this requirement when first using Postgres. By default every database has a first schema named public, and every user is automatically granted "usage" rights to that particular schema. When adding additional schemas, you must explicitly grant usage rights.
GRANT USAGE ON SCHEMA some_schema_ TO some_user_ ;
Excerpt from the Postgres doc:
For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to "look up" objects within the schema. Without this permission, it is still possible to see the object names, e.g. by querying the system tables. Also, after revoking this permission, existing backends might have statements that have previously performed this lookup, so this is not a completely secure way to prevent object access.
For more discussion see the Question, What GRANT USAGE ON SCHEMA exactly do?. Pay special attention to the Answer by Postgres expert Craig Ringer.
These commands only affect existing objects. Tables and such you create in the future get default privileges until you re-execute those lines above. See the other answer by Erwin Brandstetter to change the defaults thereby affecting future objects.
Well considering there's no overhead difference between a varchar(30) and a varchar(100) if you're only storing 20 characters in each, err on the side of caution and just make it 50.
I recommend the answer posted by Martin.
But you seem to be concerned about your queries getting too complex:
To create localized table for every table is making design and querying complex...
So you might be thinking, that instead of writing simple queries like this:
SELECT price, name, description FROM Products WHERE price < 100
...you would need to start writing queries like that:
SELECT
p.price, pt.name, pt.description
FROM
Products p JOIN ProductTranslations pt
ON (p.id = pt.id AND pt.lang = "en")
WHERE
price < 100
Not a very pretty perspective.
But instead of doing it manually you should develop your own database access class, that pre-parses the SQL that contains your special localization markup and converts it to the actual SQL you will need to send to the database.
Using that system might look something like this:
db.setLocale("en");
db.query("SELECT p.price, _(p.name), _(p.description)
FROM _(Products p) WHERE price < 100");
And I'm sure you can do even better than that.
The key is to have your tables and fields named in uniform way.
delete from t
where id in (1, 4, 6, 7)
Will you ever need to join this table to other tables? Do you need a way to uniquely identify a record? If the answer is yes, you need a primary key. Assume your data is something like a customer table that has the names of the people who are customers. There may be no natural key because you need the addresses, emails, phone numbers, etc. to determine if this Sally Smith is different from that Sally Smith, and you will be storing that information in related tables, as the person can have multiple phones, addresses, emails, etc. Suppose Sally Smith marries John Jones and becomes Sally Jones. If you don't have an artificial key on the table, when you update the name, you just changed 7 Sally Smiths to Sally Jones even though only one of them got married and changed her name. And of course, in this case, without an artificial key, how do you know which Sally Smith lives in Chicago and which one lives in LA?
You say you have no natural key; therefore you don't have any combination of fields to make unique either, which makes the artificial key critical.
I have found that any time I don't have a natural key, an artificial key is an absolute must for maintaining data integrity. If you do have a natural key, you can use that as the key field instead. But personally, unless the natural key is one field, I still prefer an artificial key and a unique index on the natural key. You will regret it later if you don't put one in.
Having a table with a foreign key to itself does make sense to me.
You can then use a common table expression in SQL or the connect by prior statement in Oracle to build your tree.
I would probably use an integer as your primary key, and then just have your string (I assume it's some sort of ID) as a separate column.
create table sample (
sample_pk INT NOT NULL AUTO_INCREMENT,
sample_id VARCHAR(100) NOT NULL,
...
PRIMARY KEY(sample_pk)
);
You can always do queries and joins conditionally on the string (ID) column (where sample_id = ...).
Here is a simple example for others visiting this old post who are confused by the example in the question and the other answer:
Delivery -> Package (One -> Many)
CREATE TABLE Delivery(
Id INT IDENTITY PRIMARY KEY,
NoteNumber NVARCHAR(255) NOT NULL
)
CREATE TABLE Package(
Id INT IDENTITY PRIMARY KEY,
Status INT NOT NULL DEFAULT 0,
Delivery_Id INT NOT NULL,
CONSTRAINT FK_Package_Delivery_Id FOREIGN KEY (Delivery_Id) REFERENCES Delivery (Id) ON DELETE CASCADE
)
The entry with the foreign key Delivery_Id (Package) is deleted with the referenced entity in the FK relationship (Delivery).
So when a Delivery is deleted the Packages referencing it will also be deleted. If a Package is deleted nothing happens to any deliveries.
First modify the column to remove the auto_increment field like this: alter table user_customer_permission modify column id int;
Next, drop the primary key. alter table user_customer_permission drop primary key;
Primary keys are for uniquely identifying rows. This is done by comparing all parts of a key to the input.
Per definition, NULL cannot be part of a successful comparison. Even a comparison to itself (NULL = NULL) will fail. This means a key containing NULL would not work.
Additionally, NULL is allowed in a foreign key, to mark an optional relationship.(*) Allowing it in the PK as well would break this.
(*)A word of caution: Having nullable foreign keys is not clean relational database design.
If there are two entities A and B where A can optionally be related to B, the clean solution is to create a resolution table (let's say AB). That table would link A with B: if there is a relationship then it would contain a record, if there isn't then it would not.
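A minimal sketch of such a resolution table, using the names A, B, and AB from above and assuming integer keys:
CREATE TABLE AB (
    a_id INT PRIMARY KEY REFERENCES A (id),  -- each A appears at most once
    b_id INT NOT NULL REFERENCES B (id)      -- no NULLs needed: no row means no relationship
);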
You can insert into a table from a SELECT.
INSERT INTO
Employee (emp_id, emp_name, emp_address, emp_state, emp_position, emp_manager)
SELECT
001,
'John Doe',
'1 River Walk, Green Street',
(SELECT id FROM state WHERE name = 'New York'),
(SELECT id FROM positions WHERE name = 'Sales Executive'),
(SELECT id FROM manager WHERE name = 'Barry Green')
FROM
dual
Or, similarly...
INSERT INTO
Employee (emp_id, emp_name, emp_address, emp_state, emp_position, emp_manager)
SELECT
001,
'John Doe',
'1 River Walk, Green Street',
state.id,
positions.id,
manager.id
FROM
state
CROSS JOIN
positions
CROSS JOIN
manager
WHERE
state.name = 'New York'
AND positions.name = 'Sales Executive'
AND manager.name = 'Barry Green'
Though this one does assume that all the look-ups exist. If, for example, there is no position name 'Sales Executive', nothing would get inserted with this version.
So the length is between 10 16-bit chars, and 40 hex digits.
In any case decide the format you are going to store, and make the field a fixed size based on that format. That way you won't have any wasted space.
In the project I'm working on, audit log also started from the very minimalistic design, like the one you described:
event ID
event date/time
event type
user ID
description
The idea was the same: to keep things simple.
However, it quickly became obvious that this minimalistic design was not sufficient. The typical audit was boiling down to questions like this:
Who the heck created/updated/deleted a record
with ID=X in the table Foo and when?
So, in order to be able to answer such questions quickly (using SQL), we ended up having two additional columns in the audit table
object type (or table name)
object ID
That's when design of our audit log really stabilized (for a few years now).
Of course, the last "improvement" would work only for tables that had surrogate keys. But guess what? All our tables that are worth auditing do have such a key!
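Put together, the stabilized design corresponds roughly to this table (a sketch; names and types are illustrative):
CREATE TABLE audit_log (
    event_id    BIGINT PRIMARY KEY,
    event_time  TIMESTAMP    NOT NULL,
    event_type  VARCHAR(20)  NOT NULL,
    user_id     BIGINT       NOT NULL,
    description VARCHAR(4000),
    object_type VARCHAR(100),   -- table name of the affected record
    object_id   BIGINT          -- surrogate key of the affected record
);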
Create two partial indexes:
CREATE UNIQUE INDEX favo_3col_uni_idx ON favorites (user_id, menu_id, recipe_id)
WHERE menu_id IS NOT NULL;
CREATE UNIQUE INDEX favo_2col_uni_idx ON favorites (user_id, recipe_id)
WHERE menu_id IS NULL;
This way, there can only be one combination of (user_id, recipe_id)
where menu_id IS NULL
, effectively implementing the desired constraint.
Possible drawbacks: you cannot have a foreign key referencing (user_id, menu_id, recipe_id)
, you cannot base CLUSTER
on a partial index, and queries without a matching WHERE
condition cannot use the partial index. (It seems unlikely you'd want a FK reference three columns wide - use the PK column instead).
If you need a complete index, you can alternatively drop the WHERE
condition from favo_3col_uni_idx
and your requirements are still enforced.
The index, now comprising the whole table, overlaps with the other one and gets bigger. Depending on typical queries and the percentage of NULL
values, this may or may not be useful. In extreme situations it might even help to maintain all three indexes (the two partial ones and a total on top).
Aside: I advise not to use mixed case identifiers in PostgreSQL.
Essential Database Naming Conventions (and Style) (click here for more detailed description)
Table names:
- choose short, unambiguous names, using no more than one or two words
- distinguish tables easily
- facilitates the naming of unique field names as well as lookup and linking tables
- give tables singular names, never plural (update: I still agree with the reasons given for this convention, but most people really like plural table names, so I've softened my stance)
... follow the link above please
This is an issue of upbringing. If somewhere in your educational or professional career you spent time feeding and caring for databases (or worked closely with talented folks who did), then the fundamental tenets of entities and relationships are well-ingrained in your thought process. Among those rudiments is how/when/why to specify keys in your database (primary, foreign and perhaps alternate). It's second nature.
If, however, you've not had such a thorough or positive experience in your past with RDBMS-related endeavors, then you've likely not been exposed to such information. Or perhaps your past includes immersion in an environment that was vociferously anti-database (e.g., "those DBAs are idiots - we few, we chosen few java/c# code slingers will save the day"), in which case you might be vehemently opposed to the arcane babblings of some dweeb telling you that FKs (and the constraints they can imply) really are important if you'd just listen.
Most everyone was taught when they were kids that brushing your teeth was important. Can you get by without it? Sure, but somewhere down the line you'll have fewer teeth available than you could have if you had brushed after every meal. If moms and dads were responsible enough to cover database design as well as oral hygiene, we wouldn't be having this conversation. :-)
Probably because both SQL Server and Sybase (to name two I am familiar with) used to have a 255 character maximum in the number of characters in a VARCHAR
column. For SQL Server, this changed in version 7 in 1996/1997 or so... but old habits sometimes die hard.
You should use UNSIGNED ZEROFILL
in your table structure.
You could store it as an integer of the number of minutes past midnight:
eg.
0 = 00:00
60 = 01:00
252 = 04:12
You would however need to write some code to reconstitute the time, but that shouldn't be tricky.
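In MySQL, for instance, the reconstruction is a one-liner, since SEC_TO_TIME works on seconds. A sketch (the table and column names are illustrative):
SELECT SEC_TO_TIME(start_minutes * 60) AS start_time  -- 252 becomes '04:12:00'
FROM events;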
When indexed view is not an option, and quick updates are not necessary, you can create a hack cache table:
select * into cachetablename from myviewname
alter table cachetablename add primary key (columns)
-- OR alter table cachetablename add rid bigint identity primary key
create index...
then sp_rename view/table or change any queries or other views that reference it to point to the cache table.
schedule daily/nightly/weekly/whatnot refresh like
begin transaction
truncate table cachetablename
insert into cachetablename select * from viewname
commit transaction
NB: this will eat space, also in your tx logs. Best used for small datasets that are slow to compute. Maybe refactor to eliminate "easy but large" columns first into an outer view.
A complement to Daniel Dinnyes' answer:
On a non-identifying relationship, you can't have the same Primary Key column (let's say, "ID") twice with the same value.
However, with an identifying relationship, you can have the same value show up twice in the "ID" column, as long as it has a different "otherColumn_ID" foreign key value, because the primary key is the combination of both columns.
Note that it doesn't matter if the FK is "non-null" or not! ;-)
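In DDL terms, an identifying relationship is simply a composite primary key that includes the foreign key. A sketch with illustrative names:
CREATE TABLE parent (
    id INT PRIMARY KEY
);
CREATE TABLE child (
    parent_id INT NOT NULL REFERENCES parent (id),
    id        INT NOT NULL,      -- the same id may repeat under different parents
    PRIMARY KEY (parent_id, id)  -- the FK is part of the PK
);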
I really liked this article: http://www.codeproject.com/Articles/359654/important-database-designing-rules-which-I-fo
I've used MongoDB extensively (for the past 6 months), building a hierarchical data management system, and I can vouch for both the ease of setup (install it, run it, use it!) and the speed. As long as you think about indexes carefully, it can absolutely scream along, speed-wise.
I gather that Cassandra, due to its use with large-scale projects like Twitter, has better scaling functionality, although the MongoDB team is working on parity there. I should point out that I've not used Cassandra beyond the trial-run stage, so I can't speak for the detail.
The real swinger for me, when we were assessing NoSQL databases, was the querying - Cassandra is basically just a giant key/value store, and querying is a bit fiddly (at least compared to MongoDB), so for performance you'd have to duplicate quite a lot of data as a sort of manual index. MongoDB, on the other hand, uses a "query by example" model.
For example, say you've got a Collection (MongoDB parlance for the equivalent of an RDBMS table) containing Users. MongoDB stores records as Documents, which are basically binary JSON objects. e.g.:
{
FirstName: "John",
LastName: "Smith",
Email: "[email protected]",
Groups: ["Admin", "User", "SuperUser"]
}
If you wanted to find all of the users called Smith who have Admin rights, you'd just create a new document (at the admin console using Javascript, or in production using the language of your choice):
{
LastName: "Smith",
Groups: "Admin"
}
...and then run the query. That's it. There are added operators for comparisons, RegEx filtering etc, but it's all pretty simple, and the Wiki-based documentation is pretty good.
Probably there is a table which stores the friend <-> user relation, say "frnd_list", having fields 'user_id' and 'frnd_id'.
Whenever a user adds another user as a friend, two new rows are created.
For instance, suppose my id is 'deep9c' and I add a user having id 'akash3b' as my friend; then two new rows are created in table "frnd_list" with values ('deep9c','akash3b') and ('akash3b','deep9c').
Now when showing the friends-list to a particular user, a simple SQL query does it: select frnd_id from frnd_list where user_id = <current_user_id>, where <current_user_id> is the id of the logged-in user (stored as a session attribute).
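As a quick sketch using the names above (the ? is a bound parameter holding the logged-in user's id):
INSERT INTO frnd_list (user_id, frnd_id) VALUES ('deep9c', 'akash3b');
INSERT INTO frnd_list (user_id, frnd_id) VALUES ('akash3b', 'deep9c');
SELECT frnd_id FROM frnd_list WHERE user_id = ?;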
Table or record locking is available in Access during data writes. You can control the default record locking through the Tools | Options | Advanced tab.
You can set this on a form's Record Locks or in your DAO/ADO code for specific needs.
Transactions shouldn't be a problem if you use them correctly.
Best practice: separate your tables from all your other code. Give each user their own copy of the code file and then share the data file on a network server. Work on a 'test' copy of the code (and a link to a test data file) and then update users' individual code files separately. If you need to make data file changes (add tables, columns, etc.), you will have to have all users exit the application to make the changes.
See other answers for Oracle comparison.
As noted by the others, it is possible to have multi-column primary keys. It should be noted, however, that if you have functional dependencies that are not introduced by a key, you should consider normalizing your relation.
Example:
Person(id, name, email, street, zip_code, area)
There can be a functional dependency id -> name, email, street, zip_code, area.
But often a zip_code is associated with an area, and thus there is an internal functional dependency zip_code -> area.
Thus one may consider splitting it into another table:
Person(id, name, email, street, zip_code)
Area(zip_code, name)
So that it is consistent with the third normal form.
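As concrete DDL, the split could look like this sketch (types are illustrative):
CREATE TABLE Area (
    zip_code VARCHAR(10) PRIMARY KEY,
    name     VARCHAR(50) NOT NULL
);
CREATE TABLE Person (
    id       INT PRIMARY KEY,
    name     VARCHAR(50),
    email    VARCHAR(100),
    street   VARCHAR(100),
    zip_code VARCHAR(10) REFERENCES Area (zip_code)
);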
GUIDs may seem to be a natural choice for your primary key - and if you really must, you could probably argue to use it for the PRIMARY KEY of the table. What I'd strongly recommend not to do is use the GUID column as the clustering key, which SQL Server does by default, unless you specifically tell it not to.
You really need to keep two issues apart:
the primary key is a logical construct - one of the candidate keys that uniquely and reliably identifies every row in your table. This can be anything, really - an INT
, a GUID
, a string - pick what makes most sense for your scenario.
the clustering key (the column or columns that define the "clustered index" on the table) - this is a physical storage-related thing, and here, a small, stable, ever-increasing data type is your best pick - INT
or BIGINT
as your default option.
By default, the primary key on a SQL Server table is also used as the clustering key - but that doesn't need to be that way! I've personally seen massive performance gains when breaking up the previous GUID-based Primary / Clustered Key into two separate keys - the primary (logical) key on the GUID, and the clustering (ordering) key on a separate INT IDENTITY(1,1)
column.
As Kimberly Tripp - the Queen of Indexing - and others have stated a great many times - a GUID
as the clustering key isn't optimal, since due to its randomness, it will lead to massive page and index fragmentation and to generally bad performance.
Yes, I know - there's newsequentialid()
in SQL Server 2005 and up - but even that is not truly and fully sequential and thus also suffers from the same problems as the GUID
- just a bit less prominently so.
Then there's another issue to consider: the clustering key on a table will be added to each and every entry on each and every non-clustered index on your table as well - thus you really want to make sure it's as small as possible. Typically, an INT
with 2+ billion rows should be sufficient for the vast majority of tables - and compared to a GUID
as the clustering key, you can save yourself hundreds of megabytes of storage on disk and in server memory.
Quick calculation - using INT
vs. GUID
as Primary and Clustering Key:
TOTAL: 25 MB vs. 106 MB - and that's just on a single table!
Some more food for thought - excellent stuff by Kimberly Tripp - read it, read it again, digest it! It's the SQL Server indexing gospel, really.
PS: of course, if you're dealing with just a few hundred or a few thousand rows - most of these arguments won't really have much of an impact on you. However: if you get into the tens or hundreds of thousands of rows, or you start counting in millions - then those points become very crucial and very important to understand.
Update: if you want to have your PKGUID
column as your primary key (but not your clustering key), and another column MYINT
(INT IDENTITY
) as your clustering key - use this:
CREATE TABLE dbo.MyTable
(PKGUID UNIQUEIDENTIFIER NOT NULL,
MyINT INT IDENTITY(1,1) NOT NULL,
.... add more columns as needed ...... )
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable
PRIMARY KEY NONCLUSTERED (PKGUID)
CREATE UNIQUE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable(MyINT)
Basically: you just have to explicitly tell the PRIMARY KEY
constraint that it's NONCLUSTERED
(otherwise it's created as your clustered index, by default) - and then you create a second index that's defined as CLUSTERED.
This will work - and it's a valid option if you have an existing system that needs to be "re-engineered" for performance. For a new system, if you start from scratch, and you're not in a replication scenario, then I'd always pick ID INT IDENTITY(1,1)
as my clustered primary key - much more efficient than anything else!
Here's a quick, admittedly butchered response, but in a sentence:
1NF: Your table is organized as an unordered set of data, and there are no repeating columns.
2NF: You don't repeat data in one column of your table because of another column.
3NF: Every column in your table relates only to your table's key -- you wouldn't have a column in a table that describes another column in your table which isn't the key.
For more detail, see wikipedia...
I suggest storing the numbers in a varchar without formatting. Then you can just reformat the numbers on the client side appropriately. Some cultures prefer to have phone numbers written differently; in France, they write phone numbers like 01-22-33-44-55.
You might also consider storing another field for the country that the phone number is for, because this can be difficult to figure out based on the number you are looking at. The UK uses 11 digit long numbers, some African countries use 7 digit long numbers.
That said, I used to work for a UK phone company, and we stored phone numbers in our database based on if they were UK or international. So, a UK phone number would be 02081234123 and an international one would be 001800300300.
I tried DBSchema. Nice features, but wildly slow for a database with about 75 tables. Unusable.
A character encoding is a way to encode characters so that they fit in memory. That is, if the charset is ISO-8859-15, the euro symbol, €, will be encoded as 0xa4, and in UTF-8, it will be 0xe282ac.
The collation is how characters are compared. In latin9 there are letters like e é è ê f. If sorted by their binary representation, they will go e f è é ê, but if the collation is set to, for example, French, you'll have them in the order you thought they would be: e é è ê all compare equal, and then f.
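You can see the effect by overriding the collation in a sort, e.g. in MySQL (a sketch; the table and column names are illustrative):
SELECT name FROM words ORDER BY name COLLATE utf8mb4_bin;         -- byte order: accented letters sort after f
SELECT name FROM words ORDER BY name COLLATE utf8mb4_unicode_ci;  -- language-aware: e é è ê compare equal, before f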
To add a very important note on what Mark S. has mentioned in his post. In the specific SQL Script that has been mentioned in the question you can NEVER mention two different file groups for storing your data rows and the index data structure.
The reason is that the index being created in this case is a clustered index on your primary key column. The clustered index data and the data rows of your table can NEVER be on different file groups.
So in case you have two file groups on your database e.g. PRIMARY and SECONDARY then below mentioned script will store your row data and clustered index data both on PRIMARY file group itself even though I've mentioned a different file group ([SECONDARY]
) for the table data. More interestingly the script runs successfully as well (when I was expecting it to give an error as I had given two different file groups :P). SQL Server does the trick behind the scene silently and smartly.
CREATE TABLE [dbo].[be_Categories](
[CategoryID] [uniqueidentifier] ROWGUIDCOL NOT NULL CONSTRAINT [DF_be_Categories_CategoryID] DEFAULT (newid()),
[CategoryName] [nvarchar](50) NULL,
[Description] [nvarchar](200) NULL,
[ParentID] [uniqueidentifier] NULL,
CONSTRAINT [PK_be_Categories] PRIMARY KEY CLUSTERED
(
[CategoryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [SECONDARY]
GO
NOTE: Your index can reside on a different file group ONLY if the index being created is non-clustered in nature.
The below script which creates a non-clustered index will get created on [SECONDARY]
file group instead when the table data already resides on [PRIMARY]
file group:
CREATE NONCLUSTERED INDEX [IX_Categories] ON [dbo].[be_Categories]
(
[CategoryName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [Secondary]
GO
You can get more information on how storing non-clustered indexes on a different file group can help your queries perform better. Here is one such link.
If something is a primary key, depending on your DB engine, the entire table gets sorted by the primary key. This means that lookups are much faster on the primary key because it doesn't have to do any dereferencing as it has to do with any other kind of index. Besides that, it's just theory.
For anyone getting to this thread late, you may want to take a look back at the .Net Base Class Library (BCL). Many people missed the changes between .Net 1.1 and .Net 2.0 when the TraceSource class was introduced (circa 2005).
Using the TraceSource is analogous to other logging frameworks, with granular control of logging, configuration in app.config/web.config, and programmatic access - without the overhead of the Enterprise Application Block.
There are also a number of comparisons floating around: "log4net vs TraceSource"
I had the same problem; the main problem was in the PreparedStatement. Use a simple Statement instead and the record inserts successfully, as below.
String st2="insert into
user(gender,name,address,telephone,fax,email,
destination,sdate,edate,Participant,hcategory,
Culture,Nature,People,Cities,Beaches,Festivals,username,password)
values('"+gender+"','"+name+"','"+address+"','"+phone+"','"+fax+"',
'"+email+"','"+desti+"','"+sdate+"','"+edate+"','"+parti+"',
'"+hotel+"','"+chk1+"','"+chk2+"','"+chk3+"','"+chk4+"',
'"+chk5+"','"+chk6+"','"+user+"','"+password+"')";
int i=stm.executeUpdate(st2);
For the record:
"Data at the root level is invalid" means that you have attempted to parse something that is not an XML document. It doesn't even start to look like an XML document. It usually means just what you found: you're parsing something like the string "C:\inetpub\wwwroot\mysite\officelist.xml".
Further improvement to afg & eryksun's solution. The following piece of code returns a sorted list of all the factors without changing run time asymptotic complexity:
def factors(n):
    l1, l2 = [], []
    for i in range(1, int(n ** 0.5) + 1):
        q, r = n // i, n % i  # Alternatively, divmod() can be used.
        if r == 0:
            l1.append(i)
            l2.append(q)  # The q values obtained are decreasing.
    if l1[-1] == l2[-1]:  # Avoid duplicating the possible factor sqrt(n)
        l1.pop()
    l2.reverse()
    return l1 + l2
Idea: Instead of using the list.sort() function to get a sorted list, which gives O(n log n) complexity, it is much faster to use list.reverse() on l2, which takes O(n) complexity. (That's how Python is made.) After l2.reverse(), l2 may be appended to l1 to get the sorted list of factors.
Notice that l1 contains the i values, which are increasing, and l2 contains the q values, which are decreasing. That's the reason behind using the above idea.
You can check out this blog post. It had solved my problem.
http://dotnetguts.blogspot.com/2010/06/restore-failed-for-server-restore.html
Select @@Version
It gave me the following output: Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 6.0 (Build 6002: Service Pack 2)
You will need to re-install to a new named instance to ensure that you are using the new SQL Server version.
Well, I use this method on my site:
<a class="share-btn" href="https://www.facebook.com/sharer/sharer.php?app_id=[your_app_id]&sdk=joey&u=[full_article_url]&display=popup&ref=plugin&src=share_button" onclick="return !window.open(this.href, 'Facebook', 'width=640,height=580')">
Works perfectly.
Try to use "n" the Node extremely simple package manager.
> npm install -g n
Once you have "n" installed. You can pull the latest node by doing the following:
> n latest
I've used it successfully on Ubuntu 16.0x and MacOS 10.12 (Sierra)
Reference: https://github.com/tj/n
Since Python 3.5, subprocess.run() is recommended over subprocess.check_output():
>>> int(subprocess.run(["pidof", "-s", "your_process"], stdout=subprocess.PIPE).stdout)
Also, since Python 3.7, you can use the capture_output=True parameter to capture stdout and stderr:
>>> int(subprocess.run(["pidof", "-s", "your process"], capture_output=True).stdout)
You have several options:
JS
var json_arr = JSON.stringify(arr);
PHP
$arr = json_decode($_POST['arr']);
Sending an array via FormData.
JS
// Use <#> or any other delimiter you want
var serial_arr = arr.join("<#>");
PHP
$arr = explode("<#>", $_POST['arr']);
To improve on the answer provided by @artofwarfare, here is what I consider a neater way to check for the message attribute and print it, or print the Exception object as a fallback.
try:
pass
except Exception as e:
print getattr(e, 'message', repr(e))
The call to repr
is optional, but I find it necessary in some use cases.
Update #1:
Following the comment by @MadPhysicist, here's a proof of why the call to repr
might be necessary. Try running the following code in your interpreter:
try:
raise Exception
except Exception as e:
print(getattr(e, 'message', repr(e)))
print(getattr(e, 'message', str(e)))
Update #2:
Here is a demo with specifics for Python 2.7 and 3.5: https://gist.github.com/takwas/3b7a6edddef783f2abddffda1439f533
Let's say your HTML looks something like this:
<div class="box">
<!-- stuff -->
<p class="bet_time">Bet 5 days ago</p>
</div>
Then, with CSS, you can make that text appear in the bottom right like so:
.box {
position:relative;
}
.bet_time {
position:absolute;
bottom:0;
right:0;
}
The way this works is that absolutely positioned elements are always positioned with respect to the first relatively positioned parent element, or the window. Because we set the box's position to relative, .bet_time positions its right edge to the right edge of .box and its bottom edge to the bottom edge of .box.
There are several possibilities; what's best depends on the text you work on.
Two possibilities come to mind:
Enter visual mode (v, V, ...), select the text with cursor movement and press d.
Use a text object: dap (delete a paragraph).
Welcome to 2020! Here is the answer in Python/Django:
from django.conf import settings
import boto3
s3 = boto3.client('s3')
s3.delete_object(Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=f"media/{item.file.name}")
Took me far too long to find the answer and it was as simple as this.
The answer has already been given, but I think that for some situations this is also an interesting way to get a string from an NSInteger:
NSInteger value = 12;
NSString * string = [NSString stringWithFormat:@"%0.0f", (float)value];
This is holmberd's answer with a "table header" implementation:
function createTable(tableData) {
var table = document.createElement('table');
var header = document.createElement("tr");
// get first row to be header
var headers = tableData[0];
// create table header
headers.forEach(function(rowHeader){
var th = document.createElement("th");
th.appendChild(document.createTextNode(rowHeader));
header.appendChild(th);
});
console.log(headers);
// insert table header
table.append(header);
var row = {};
var cell = {};
// remove first row - header
tableData.shift();
tableData.forEach(function(rowData, index) {
row = table.insertRow();
console.log("indice: " + index);
rowData.forEach(function(cellData) {
cell = row.insertCell();
cell.textContent = cellData;
});
});
document.body.appendChild(table);
}
createTable([["row 1, cell 1", "row 1, cell 2"], ["row 2, cell 1", "row 2, cell 2"], ["row 3, cell 1", "row 3, cell 2"]]);
How about: if(($counter % $display) == 0)
def nans(df): return df[df.isnull().any(axis=1)]
then when ever you need it you can type:
nans(your_dataframe)
Check out this one, it's open source http://amateras.sourceforge.jp/cgi-bin/fswiki_en/wiki.cgi?page=EclipseHTMLEditor
I would prefer to store the image in a directory, then store a reference to the image file in the database.
However, if you do store the image in the database, you should partition your database so the image column resides in a separate file.
You can read more about using filegroups here http://msdn.microsoft.com/en-us/library/ms179316.aspx.
Very easy, trial and error. Go to the cell you want the month in and type the month. Go to the next cell and type the year; something weird will come up. Then go to the Number section, click the little arrow in the bottom right, select Text, and it will change to the year you originally typed.
<textarea name='Status'> </textarea>
<input type='button' value='Status Update'>
You have a few problems with your code, like using . for concatenation (JavaScript concatenates strings with +).
Try this -
$(function () {
$('input').on('click', function () {
var Status = $(this).val();
$.ajax({
url: 'Ajax/StatusUpdate.php',
data: {
text: $("textarea[name=Status]").val(),
Status: Status
},
dataType : 'json'
});
});
});
NOTE: Programs compiled with high-optimization levels (-O3, -O4) cannot generate many debugging symbols for optimized variables, in-lined functions and unrolled loops, regardless of the symbols being embedded (-g) or extracted (objcopy) into a '.debug' file.
Alternate approaches are available; the one below embeds the VCS details needed to rebuild the binary later.
This provides a means to rebuild the production code with full debugging and symbols at a later date. Being able to re-build the original production code with no optimizations is a tremendous help for debugging. (NOTE: This assumes testing was done with the optimized version of the program.)
Your build system can create a .c file loaded with the compile date, commit, and other VCS details. Here is a 'make + git' example:
program: program.o build_version.o
program.o: program.cpp program.h
build_version.o: build_version.c
build_version.c:
@echo "const char *build1=\"VCS: Commit: $(shell git log -1 --pretty=%H)\";" > "$@"
@echo "const char *build2=\"VCS: Date: $(shell git log -1 --pretty=%cd)\";" >> "$@"
@echo "const char *build3=\"VCS: Author: $(shell git log -1 --pretty="%an %ae")\";" >> "$@"
@echo "const char *build4=\"VCS: Branch: $(shell git symbolic-ref HEAD)\";" >> "$@"
# TODO: Add compiler options and other build details
.INTERMEDIATE: build_version.c
After the program is compiled you can locate the original 'commit' for your code by using the command: strings -a my_program | grep VCS
VCS: PROGRAM_NAME=my_program
VCS: Commit=190aa9cace3b12e2b58b692f068d4f5cf22b0145
VCS: BRANCH=refs/heads/PRJ123_feature_desc
VCS: AUTHOR=Joe Developer [email protected]
VCS: COMMIT_DATE=2013-12-19
All that is left is to check-out the original code, re-compile without optimizations, and start debugging.
I've built a module that makes in-memory pagination incredibly simple.
It allows you to paginate by simply replacing ng-repeat
with dir-paginate
, specifying the items per page as a piped filter, and then dropping the controls wherever you like in the form of a single directive, <dir-pagination-controls>
To take the original example asked by Tomarto, it would look like this:
<ul class='phones'>
<li class='thumbnail' dir-paginate='phone in phones | filter:searchBar | orderBy:orderProp | limitTo:limit | itemsPerPage: limit'>
<a href='#/phones/{{phone.id}}' class='thumb'><img ng-src='{{phone.imageUrl}}'></a>
<a href='#/phones/{{phone.id}}'>{{phone.name}}</a>
<p>{{phone.snippet}}</p>
</li>
</ul>
<dir-pagination-controls></dir-pagination-controls>
There is no need for any special pagination code in your controller. It's all handled internally by the module.
You said that
the browser gets the certificate's issuer information from that certificate, then uses that to contact the issuerer, and somehow compares certificates for validity.
The client doesn't have to check with the issuer, because of two things:
1. The client already has the issuer's certificate (and thus the issuer's public key) in its local trust store.
2. The certificate carries the issuer's digital signature, which the client can verify locally using that public key.
Notice that 2. can't be done without 1.
This is better explained in this big diagram I made some time ago (skip to "what's a signature?" at the bottom).
Make sure you really need to invert the matrix. This is often unnecessary and can be numerically unstable. When most people ask how to invert a matrix, they really want to know how to solve Ax = b where A is a matrix and x and b are vectors. It's more efficient and more accurate to use code that solves the equation Ax = b for x directly than to calculate A inverse and then multiply the inverse by b. Even if you need to solve Ax = b for many b values, it's not a good idea to invert A. If you have to solve the system for multiple b values, save the Cholesky factorization of A, but don't invert it.
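As a small NumPy/SciPy sketch of that advice (the matrix and right-hand sides here are made up; A must be symmetric positive definite for Cholesky to apply):
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)           # preferred: solve Ax = b directly
x_slow = np.linalg.inv(A) @ b       # works, but slower and generally less accurate

# for many right-hand sides: factor once, then reuse the factorization
factor = cho_factor(A)
for rhs in (b, 2 * b, 3 * b):
    print(cho_solve(factor, rhs))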
SELECT tt.*
FROM TestTable tt
INNER JOIN
(
SELECT coord, MAX(datetime) AS MaxDateTime
FROM TestTable
GROUP BY coord
) groupedtt
ON tt.coord = groupedtt.coord
AND tt.datetime = groupedtt.MaxDateTime
Whether you're using SORT -UNIQUE, SELECT -UNIQUE or GET-UNIQUE from Powershell 2.0 to 5.1, all the examples given are on single-column arrays. I have yet to get this to work across arrays with multiple columns to remove duplicate rows and leave single occurrences of a row across said columns, or to develop an alternative script solution. Instead, these cmdlets have only returned rows that occurred exactly once and dumped everything that had a duplicate. Typically I have to remove duplicates manually from the final CSV output in Excel to finish the report, but sometimes I would like to keep working with the data within Powershell after removing the duplicates.
There's an example of how to use Font Awesome alongside an unordered list on their examples page.
<ul class="icons">
<li><i class="icon-ok"></i> Lists</li>
<li><i class="icon-ok"></i> Buttons</li>
<li><i class="icon-ok"></i> Button groups</li>
<li><i class="icon-ok"></i> Navigation</li>
<li><i class="icon-ok"></i> Prepended form inputs</li>
</ul>
If you can't get it working after trying this code, then you're not including the library correctly. According to their website, you should include the libraries as such:
<link rel="stylesheet" href="../css/bootstrap.css">
<link rel="stylesheet" href="../css/font-awesome.css">
Also check out Chris Coyier's whimsical post on icon fonts on his website, CSS-Tricks.
Here's a screencast by him as well talking about how to create your own icon font-face.
If your goal is to output your array as a string for debugging, you can use the print_r() function, which receives an expression parameter (your array) and an optional boolean return parameter. Normally the function is used to echo the array, but if you set the return parameter to true, it will return the printed output as a string instead.
Example:
//We create a 2-dimension Array as an example
$ProductsArray = array();
$row_array['Qty'] = 20;
$row_array['Product'] = "Cars";
array_push($ProductsArray,$row_array);
$row_array2['Qty'] = 30;
$row_array2['Product'] = "Wheels";
array_push($ProductsArray,$row_array2);
//We save the array's printed output into a variable using the print_r function
$ArrayString = print_r($ProductsArray, 1);
//You can echo the string
echo $ArrayString;
//or Log the string into a Log file
$date = date("Y-m-d H:i:s", time());
$LogFile = "Log.txt";
$fh = fopen($LogFile, 'a') or die("can't open file");
$stringData = "--".$date."\n".$ArrayString."\n";
fwrite($fh, $stringData);
fclose($fh);
This will be the output:
Array
(
[0] => Array
(
[Qty] => 20
[Product] => Cars
)
[1] => Array
(
[Qty] => 30
[Product] => Wheels
)
)
Try this. It works for me. I added the CSS for my solution, and I didn't change anything in the DataTables CSS except { border-collapse: separate; }
.dataTables_scrollHeadInner { /* for positioning the header when scrolling is applied */
padding: 0 !important;
}
Here is a function I use:
var map = new google.maps.Map(document.getElementById('map'), {
center: new google.maps.LatLng(52.2, 5),
mapTypeId: google.maps.MapTypeId.ROADMAP,
zoom: 7
});
function zoomTo(level) {
google.maps.event.addListener(map, 'zoom_changed', function () {
var zoomChangeBoundsListener = google.maps.event.addListener(map, 'bounds_changed', function (event) {
if (this.getZoom() > level && this.initialZoom == true) {
this.setZoom(level);
this.initialZoom = false;
}
google.maps.event.removeListener(zoomChangeBoundsListener);
});
});
}
You can delete the HEAD and restore your repository to a new state, where you can create a new initial commit:
git update-ref -d HEAD
After you create a new commit, if you have already pushed to the remote, you will need to force-push in order to overwrite the previous initial commit:
git push --force origin
To get the width and height of the viewport:
var viewportWidth = $(window).width();
var viewportHeight = $(window).height();
To handle the resize event of the page:
$(window).resize(function() {
});
import requests
response = requests.post('http://httpbin.org/post', data={'key1':'value1'})
print(response.request.url)
print(response.request.body)
print(response.request.headers)
Response
objects have a .request
property which is the original PreparedRequest
object that was sent.
Also note that if you want to apply an instance method to an array, you need to pass the function as:
call_user_func_array(array($instance, "MethodName"), $myArgs);
I had the same problem. I added this code to my application and it works fine for me.
Step 1: add the jquery.placeholder.js plugin.
Step 2: write the code below in your page:
$(function () {
$('input, textarea').placeholder();
});
And now I can see placeholders on the input boxes!
This is what worked for me...
$('#dialog').live("dialogclose", function(){
//code to run on dialog close
});
The true problem here may be that you have a 32-bit system. MongoDB 3.x was never made to be used on a 32-bit system, so the repositories for 32-bit are empty (hence why it is not found). Installing the default 2.x Ubuntu package might be your best bet with:
sudo apt-get install -y mongodb
Another workaround, if you nevertheless want to get the latest version of Mongo:
You can go to https://www.mongodb.org/downloads and use the drop-down to select "Linux 32-bit legacy"
But it comes with severe limitations...
This 32-bit legacy distribution does not include SSL encryption and is limited to around 2GB of data. In general you should use the 64 bit builds. See here for more information.
Not sure if there was file corruption or what, but after confirming proper pom configuration I was able to resolve this issue by deleting the jar from my local m2 repository, forcing Maven to download it again when I ran the tests.
You can do it with collections.namedtuple
and itertools
:
import collections
import itertools
def Constants(Name, *Args, **Kwargs):
t = collections.namedtuple(Name, itertools.chain(Args, Kwargs.keys()))
return t(*itertools.chain(Args, Kwargs.values()))
>>> myConstants = Constants('MyConstants', 'One', 'Two', Three = 'Four')
>>> print myConstants.One
One
>>> print myConstants.Two
Two
>>> print myConstants.Three
Four
>>> myConstants.One = 'Two'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
Use the defined?
keyword (documentation). It will return a String with the kind of the item, or nil
if it doesn’t exist.
>> a = 1
=> 1
>> defined? a
=> "local-variable"
>> defined? b
=> nil
>> defined? nil
=> "nil"
>> defined? String
=> "constant"
>> defined? 1
=> "expression"
As skalee commented: "It is worth noting that variable which is set to nil is initialized."
>> n = nil
>> defined? n
=> "local-variable"
If you are on a 64-bit OS then you need to install these additional libraries:
sudo apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0
#include <iostream>
class bar
{
public:
bar()
{
std::cout << "bar() called" << std::endl;
}
~bar()
{
std::cout << "~bar() called" << std::endl;
}
};
class foo
{
public:
foo()
: b(new bar())
{
std::cout << "foo() called" << std::endl;
throw "throw something";
}
~foo()
{
delete b;
std::cout << "~foo() called" << std::endl;
}
private:
bar *b;
};
int main(void)
{
try {
std::cout << "heap: new foo" << std::endl;
foo *f = new foo();
} catch (const char *e) {
std::cout << "heap exception: " << e << std::endl;
}
try {
std::cout << "stack: foo" << std::endl;
foo f;
} catch (const char *e) {
std::cout << "stack exception: " << e << std::endl;
}
return 0;
}
the output:
heap: new foo
bar() called
foo() called
heap exception: throw something
stack: foo
bar() called
foo() called
stack exception: throw something
The destructors are not called, so if an exception needs to be thrown in a constructor, there is a lot of cleanup to do (e.g. freeing anything that was already allocated, like b above).
Add double quotes to use the caret operator in the version number:
composer require middlewares/whoops "^0.4"
Since stop
is boolean you can change that part to:
//...
if(stop) // Or to: if (stop == true)
{
sb.append("y");
getWhoozitYs();
}
return sb.toString();
//...
You need to use a third-party plugin like AndroidIcons Drawable Import to install this. Go to Android Studio > Preferences > Plugins > and browse for AndroidIcons Drawable Import.
Restart Android Studio. If you do not have the drawables folder created, create it by importing any image as "Action Bar and Tab Icons" or "Notification Icons". Then right-click in the file explorer and you can see 4 options in the new tab. Use any one according to your need.
There are 2 ways to sort data:
1) Sort just the data and fill it into the grid:
DataGridView datagridview1 = new DataGridView(); // for show data
DataTable dt1 = new DataTable(); // have data
DataTable dt2 = new DataTable(); // temp data table
DataRow[] dra = dt1.Select("", "ID DESC");
if (dra.Length > 0)
dt2 = dra.CopyToDataTable();
datagridview1.DataSource = dt2;
2) Sort the default view, which is like sorting via the grid column header:
DataGridView datagridview1 = new DataGridView(); // for show data
DataTable dt1 = new DataTable(); // have data
dt1.DefaultView.Sort = "ID DESC";
datagridview1.DataSource = dt1;
I think the best way to avoid the OutOfMemoryError is to face it and understand it.
I made an app to intentionally cause OutOfMemoryError and monitor memory usage.
After I've done a lot of experiments with this App, I've got the following conclusions:
I'm going to talk about SDK versions before Honeycomb first.
Bitmap is stored in the native heap, but it will get garbage collected automatically; calling recycle() is needless.
If {VM heap size} + {allocated native heap memory} >= {VM heap size limit for the device}, and you are trying to create a bitmap, OOM will be thrown.
NOTICE: VM HEAP SIZE is counted rather than VM ALLOCATED MEMORY.
VM heap size will never shrink after growing, even if the allocated VM memory shrinks.
So you have to keep the peak VM memory as low as possible, to keep the VM heap size from growing too big and to save available memory for bitmaps.
Manually calling System.gc() is meaningless; the system will call it first before trying to grow the heap size.
Native heap size will never shrink either, but it's not counted for OOM, so there is no need to worry about it.
Then, let's talk about SDKs starting from Honeycomb.
Bitmap is stored in the VM heap; native memory is not counted for OOM.
The condition for OOM is much simpler: {VM heap size} >= {VM heap size limit for the device}.
So you have more available memory to create bitmaps with the same heap size limit, and OOM is less likely to be thrown.
Here are some of my observations about garbage collection and memory leaks.
You can see it yourself in the app. If an Activity executed an AsyncTask that was still running after the Activity was destroyed, the Activity will not get garbage collected until the AsyncTask finishes.
This is because an AsyncTask is an instance of an anonymous inner class, and it holds a reference to the Activity.
Calling AsyncTask.cancel(true) will not stop the execution if the task is blocked in an IO operation in a background thread.
Callbacks are anonymous inner classes too, so if a static instance in your project holds them and does not release them, memory will be leaked.
If you scheduled a repeating or delayed task, for example a Timer, and you do not call cancel() and purge() in onPause(), memory will be leaked.
There is a way to achieve this without creating a function or using an external tool. By using Postgres' query_to_xml()
function that can dynamically run a query inside another query, it's possible to search a text across many tables. This is based on my answer to retrieve the rowcount for all tables:
To search for the string foo
across all tables in a schema, the following can be used:
with found_rows as (
select format('%I.%I', table_schema, table_name) as table_name,
query_to_xml(format('select to_jsonb(t) as table_row
from %I.%I as t
where t::text like ''%%foo%%'' ', table_schema, table_name),
true, false, '') as table_rows
from information_schema.tables
where table_schema = 'public'
)
select table_name, x.table_row
from found_rows f
left join xmltable('//table/row'
passing table_rows
columns
table_row text path 'table_row') as x on true
Note that the use of xmltable
requires Postgres 10 or newer. For older Postgres version, this can be also done using xpath().
with found_rows as (
select format('%I.%I', table_schema, table_name) as table_name,
query_to_xml(format('select to_jsonb(t) as table_row
from %I.%I as t
where t::text like ''%%foo%%'' ', table_schema, table_name),
true, false, '') as table_rows
from information_schema.tables
where table_schema = 'public'
)
select table_name, x.table_row
from found_rows f
cross join unnest(xpath('/table/row/table_row/text()', table_rows)) as r(data)
The common table expression (WITH ...
) is only used for convenience. It loops through all tables in the public
schema. For each table the following query is run through the query_to_xml()
function:
select to_jsonb(t)
from some_table t
where t::text like '%foo%';
The where clause is used to make sure the expensive generation of XML content is only done for rows that contain the search string. This might return something like this:
<table xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<row>
<table_row>{"id": 42, "some_column": "foobar"}</table_row>
</row>
</table>
The conversion of the complete row to jsonb
is done, so that in the result one could see which value belongs to which column.
The above might return something like this:
table_name | table_row
-------------+----------------------------------------
public.foo | {"id": 1, "some_column": "foobar"}
public.bar | {"id": 42, "another_column": "barfoo"}
Well, it's too late to post an answer, but I feel my explanation will be easy to understand for beginners with Angular. The following is one of the examples that I give during my presentation.
Consider your angular Application as a building. A building can have N
number of apartments in it. An apartment is considered as a module. An Apartment can then have N
number of rooms which correspond to the building blocks of an Angular application named components.
Now each apartment (Module) will have rooms (Components), lifts (Services) to enable easier movement in and out of the apartments, and wires (Pipes) to transform data and make it useful in the apartments.
You will also have places like swimming pool, tennis court which are being shared by all building residents. So these can be considered as components inside SharedModule.
Basically, the difference is as follows,
Follow my slides to understand the building blocks of an Angular application
Here is my session on Building Blocks of Angular for beginners
For MySQL version 5:
alter table *table_name* change column *old_column_name* *new_column_name* datatype();
I found (probably) a solution
In the process of downloading a few thousand log files from one server to the next I suddenly had the need to do some serious multithreaded downloading in BSD, preferably with Wget as that was the simplest way I could think of handling this. A little looking around led me to this little nugget:
wget -r -np -N [url] & wget -r -np -N [url] & wget -r -np -N [url] & wget -r -np -N [url]
Just repeat the
wget -r -np -N [url]
for as many threads as you need... Now given this isn’t pretty and there are surely better ways to do this but if you want something quick and dirty it should do the trick...
Note: the option -N
makes wget
download only "newer" files, which means it won't overwrite or re-download files unless their timestamp changes on the server.
This way works:
class A
{
    int value;
    struct Wrap
    {
        A& a;
        Wrap(A& aa) : a(aa) {}
        operator int() { return a.value; }
        operator std::string() { std::stringstream ss; ss << a.value; return ss.str(); }
    };
public:
    Wrap operator*() { return Wrap(*this); }
};
Here you go: Working Fiddle
$(function(){
var dateFormat = 'DD-MM-YYYY';
alert(moment(moment("2012-10-19").format(dateFormat),dateFormat,true).isValid());
});
parseInt() is just going to throw an exception if the parsing can't complete successfully. You can instead use Integer, the corresponding object type, which makes things a little bit cleaner. So you probably want something closer to:
Integer s = null;
try {
s = Integer.valueOf(startField.getText());
}
catch (NumberFormatException e) {
// ...
}
if (s != null) { ... }
Beware if you do decide to use parseInt()
! parseInt()
doesn't support good internationalization, so you have to jump through even more hoops:
NumberFormat nf = NumberFormat.getIntegerInstance(locale);
nf.setParseIntegerOnly(true);
nf.setMaximumIntegerDigits(9); // Or whatever you'd like to max out at.
// Start parsing from the beginning.
ParsePosition p = new ParsePosition(0);
Number n = nf.parse(str, p); // note: returns null if the parsing blows up
if (n == null || p.getIndex() != str.length()) {
    // Parsing failed, or there's some stuff after all the digits are done being processed.
} else {
    int val = n.intValue();
    // Work with the processed value here.
}
Under Windows 7, open the Event Viewer. You can do this the way Gishu suggested for XP, typing eventvwr
from the command line, or by opening the Control Panel, selecting System and Security, then Administrative Tools and finally Event Viewer. It may require UAC approval or an admin password.
In the left pane, expand Windows Logs and then System. You can filter the logs with Filter Current Log... from the Actions pane on the right and selecting "Service Control Manager." Or, depending on why you want this information, you might just need to look through the Error entries.
The actual log entry pane (not shown) is pretty user-friendly and self-explanatory. You'll be looking for messages like the following:
"The Praxco Assistant service entered the stopped state."
"The Windows Image Acquisition (WIA) service entered the running state."
"The MySQL service terminated unexpectedly. It has done this 3 time(s)."
I had the same problem: I love the resolve
object, but that only works for the content of ng-view. What if you have controllers (for top-level nav, let's say) that exist outside of ng-view and which need to be initialized with data before the routing even begins to happen? How do we avoid mucking around on the server-side just to make that work?
Use manual bootstrap and an angular constant. A naive XHR gets you your data, and you bootstrap angular in its callback, which deals with your async issues. In the example below, you don't even need to create a global variable. The returned data exists only in angular scope as an injectable, and isn't even present inside of controllers, services, etc. unless you inject it. (Much as you would inject the output of your resolve
object into the controller for a routed view.) If you prefer to thereafter interact with that data as a service, you can create a service, inject the data, and nobody will ever be the wiser.
Example:
//First, we have to create the angular module, because all the other JS files are going to load while we're getting data and bootstrapping, and they need to be able to attach to it.
var MyApp = angular.module('MyApp', ['dependency1', 'dependency2']);
// Use angular's version of document.ready() just to make extra-sure DOM is fully
// loaded before you bootstrap. This is probably optional, given that the async
// data call will probably take significantly longer than DOM load. YMMV.
// Has the added virtue of keeping your XHR junk out of global scope.
angular.element(document).ready(function() {
//first, we create the callback that will fire after the data is down
function xhrCallback() {
var myData = this.responseText; // the XHR output
// here's where we attach a constant containing the API data to our app
// module. Don't forget to parse JSON, which `$http` normally does for you.
MyApp.constant('NavData', JSON.parse(myData));
// now, perform any other final configuration of your angular module.
MyApp.config(['$routeProvider', function ($routeProvider) {
$routeProvider
.when('/someroute', {configs})
.otherwise({redirectTo: '/someroute'});
}]);
// And last, bootstrap the app. Be sure to remove `ng-app` from your index.html.
angular.bootstrap(document, ['MyApp']);
};
//here, the basic mechanics of the XHR, which you can customize.
var oReq = new XMLHttpRequest();
oReq.onload = xhrCallback;
oReq.open("get", "/api/overview", true); // your specific API URL
oReq.send();
})
Now, your NavData
constant exists. Go ahead and inject it into a controller or service:
angular.module('MyApp')
.controller('NavCtrl', ['$scope', 'NavData', function ($scope, NavData) {
$scope.localObject = NavData; //now it's addressable in your templates
}]);
Of course, using a bare XHR object strips away a number of the niceties that $http
or JQuery would take care of for you, but this example works with no special dependencies, at least for a simple get
. If you want a little more power for your request, load up an external library to help you out. But I don't think it's possible to access angular's $http
or other tools in this context.
(SO related post)
git reset --hard origin/master
says: throw away all my staged and unstaged changes, forget everything on my current local branch and make it exactly the same as origin/master
.
You probably wanted to ask this before you ran the command. The destructive nature is hinted at by using the same words as in "hard reset".
You can also execute the following code. It is tested and working.
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.UIKeyboardInteractive;
import com.jcraft.jsch.UserInfo;
public class SFTPTest {
public static void main(String[] args) {
JSch jsch = new JSch();
Session session = null;
try {
session = jsch.getSession("username", "mywebsite.com", 22); //default port is 22
UserInfo ui = new MyUserInfo();
session.setUserInfo(ui);
session.setPassword("123456".getBytes());
session.connect();
Channel channel = session.openChannel("sftp");
channel.connect();
System.out.println("Connected");
} catch (JSchException e) {
e.printStackTrace(System.out);
} catch (Exception e){
e.printStackTrace(System.out);
} finally {
if (session != null) {
session.disconnect();
System.out.println("Disconnected");
}
}
}
public static class MyUserInfo implements UserInfo, UIKeyboardInteractive {
@Override
public String getPassphrase() {
return null;
}
@Override
public String getPassword() {
return null;
}
@Override
public boolean promptPassphrase(String arg0) {
return false;
}
@Override
public boolean promptPassword(String arg0) {
return false;
}
@Override
public boolean promptYesNo(String arg0) {
return false;
}
@Override
public void showMessage(String arg0) {
}
@Override
public String[] promptKeyboardInteractive(String arg0, String arg1,
String arg2, String[] arg3, boolean[] arg4) {
return null;
}
}
}
Please substitute the appropriate values.
Query to show a 3-variable boolean expression truth table:
;WITH cteData AS
(SELECT 0 AS A, 0 AS B, 0 AS C
UNION ALL SELECT 0,0,1
UNION ALL SELECT 0,1,0
UNION ALL SELECT 0,1,1
UNION ALL SELECT 1,0,0
UNION ALL SELECT 1,0,1
UNION ALL SELECT 1,1,0
UNION ALL SELECT 1,1,1
)
SELECT cteData.*,
CASE WHEN
(A=1) OR (B=1) AND (C=1)
THEN 'True' ELSE 'False' END AS Result
FROM cteData
Results for (A=1) OR (B=1) AND (C=1)
:
A B C Result
0 0 0 False
0 0 1 False
0 1 0 False
0 1 1 True
1 0 0 True
1 0 1 True
1 1 0 True
1 1 1 True
Results for (A=1) OR ( (B=1) AND (C=1) )
are the same.
Results for ( (A=1) OR (B=1) ) AND (C=1)
:
A B C Result
0 0 0 False
0 0 1 False
0 1 0 False
0 1 1 True
1 0 0 False
1 0 1 True
1 1 0 False
1 1 1 True
Denis's answer made me think of another solution using JSON objects:
a header property checker:
Predicate<object> hasHeader = jsonObject =>
((JObject)jsonObject).OfType<JProperty>()
.Any(prop => prop.Name == "header");
or maybe better:
Predicate<object> hasHeader = jsonObject =>
((JObject)jsonObject).Property("header") != null;
for example:
dynamic json = JsonConvert.DeserializeObject(data);
string header = hasHeader(json) ? json.header : null;
Just for fun, here is a GOTO implementation in Java.
Example:
1  public class GotoDemo {
2      public static void main(String[] args) {
3          int i = 3;
4          System.out.println(i);
5          i = i - 1;
6          if (i >= 0) {
7              GotoFactory.getSharedInstance().getGoto().go(4);
8          }
9
10         try {
11             System.out.print("Hell");
12             if (Math.random() > 0) throw new Exception();
13             System.out.println("World!");
14         } catch (Exception e) {
15             System.out.print("o ");
16             GotoFactory.getSharedInstance().getGoto().go(13);
17         }
18     }
19 }
Running it:
$ java -cp bin:asm-3.1.jar GotoClassLoader GotoDemo
3
2
1
0
Hello World!
Do I need to add "don't use it!"?
You can solve it like this:
git reset --hard sha
where sha is the commit hash, e.g.: 85a108ec5d8443626c690a84bc7901195d19c446
You can get the desired sha with the command:
git log
It's not in the language core, but a very mature library that does what you want is Twisted. It introduces the Deferred object, which you can attach callbacks or error handlers ("errbacks") to. A Deferred is basically a "promise" that a function will have a result eventually.
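A minimal sketch of the Deferred idea (the names here are illustrative):
from twisted.internet import defer

def on_result(result):
    print("callback got:", result)
    return result

def on_error(failure):
    print("errback got:", failure.getErrorMessage())

d = defer.Deferred()
d.addCallbacks(on_result, on_error)

# somewhere else, when the result is finally available, the Deferred is fired:
d.callback("the eventual result")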
select result from (
select vorsteuerid as result, count(*) as anzahl from kreditorenrechnung where kundeid = 7148
group by vorsteuerid
) a order by anzahl desc limit 0,1
The simplest way to get tooltips in most browsers is to set some text in the title attribute.
eg.
<img src="myimage.jpg" alt="a cat" title="My cat sat on a table" />
produces an image of a cat whose tooltip reads "My cat sat on a table" when you hover over it.
Title attributes can be applied to most HTML elements.
Here's a possible way to do it:
$string=exec('getmac');
$mac=substr($string, 0, 17);
echo $mac;
I couldn't get the top two answers to work, and none of the other answers were helpful to me. So I paid three people $30 from Reddit r/forhire and Upwork and got some really good answers. This answer should save you $90.
HTML
<div id="chatscreen">
<div id="inner">
</div>
</div>
CSS
#chatscreen {
width: 300px;
overflow-y: scroll;
max-height:100px;
}
Javascript
$(function(){
var scrolled = false;
var lastScroll = 0;
var count = 0;
$("#chatscreen").on("scroll", function() {
var nextScroll = $(this).scrollTop();
if (nextScroll <= lastScroll) {
scrolled = true;
}
lastScroll = nextScroll;
console.log(nextScroll, $("#inner").height())
if ((nextScroll + 100) == $("#inner").height()) {
scrolled = false;
}
});
function updateScroll(){
if(!scrolled){
var element = document.getElementById("chatscreen");
var inner = document.getElementById("inner");
element.scrollTop = inner.scrollHeight;
}
}
// Now let's load our messages
function load_messages(){
$( "#inner" ).append( "Test" + count + "<br/>" );
count = count + 1;
updateScroll();
}
setInterval(load_messages,300);
});
The site bros' solution:
HTML
<div id="chatscreen">
</div>
CSS
#chatscreen {
height: 300px;
border: 1px solid purple;
overflow: scroll;
}
Javascript
$(function(){
var isScrolledToBottom = false;
// Now let's load our messages
function load_messages(){
$( "#chatscreen" ).append( "<br>Test" );
updateScr();
}
var out = document.getElementById("chatscreen");
var c = 0;
$("#chatscreen").on('scroll', function(){
console.log(out.scrollHeight);
isScrolledToBottom = out.scrollHeight - out.clientHeight <= out.scrollTop + 10;
});
function updateScr() {
// allow 1px inaccuracy by adding 1
//console.log(out.scrollHeight - out.clientHeight, out.scrollTop + 1);
var newElement = document.createElement("div");
newElement.innerHTML = c++;
out.appendChild(newElement);
console.log(isScrolledToBottom);
// scroll to bottom if isScrolledToBotto
if(isScrolledToBottom) {out.scrollTop = out.scrollHeight - out.clientHeight; }
}
var add = setInterval(updateScr, 1000);
setInterval(load_messages,300); // change to 300 to show the latest message you sent after pressing enter // comment this line and it works, uncomment and it fails
// leaving it on 1000 shows the second to last message
});
HTML
<div id="chatscreen"></div>
CSS
#chatscreen {
height: 100px;
overflow: scroll;
border: 1px solid #000;
}
Javascript
$(function(){
// Now let's load our messages
function load_messages(){
$( "#chatscreen" ).append( "<br>Test" );
}
var out = document.getElementById("chatscreen");
var c = 0;
var add = setInterval(function() {
// allow 1px inaccuracy by adding 1
var isScrolledToBottom = out.scrollHeight - out.clientHeight <= out.scrollTop + 1;
load_messages();
// scroll to bottom if isScrolledToBotto
if(isScrolledToBottom) {out.scrollTop = out.scrollHeight - out.clientHeight; }
}, 1000);
});
uint32_t is defined in the standard, in the <cstdint> synopsis [cstdint.syn]:
namespace std {
//...
typedef unsigned integer type uint32_t; // optional
//...
}
uint32 is not; it's a shortcut provided by some compilers (probably as typedef uint32_t uint32) for ease of use.
This works well
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
You need to use the overflow option like below:
.nav{
max-height: 300px;
overflow-y: scroll;
}
Change the height according to the number of items you need to show.
The problem can be seen when checking the remote.origin.fetch
setting
(The lines starting with $
are bash prompts with the commands I typed. The other lines are the resulting output)
$ git config --get remote.origin.fetch
+refs/heads/master:refs/remotes/origin/master
As you can see, in my case, the remote was set to fetch the master branch specifically and only. I fixed it as per below, including the second command to check the results.
$ git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
$ git config --get remote.origin.fetch
+refs/heads/*:refs/remotes/origin/*
The wildcard *
of course means everything under that path.
Unfortunately I saw this comment after I had already dug through and found the answer by trial and error.
It appears you have illegal characters in your source. I cannot figure out what character \240 should be (it is often a non-breaking space pasted in from a web page), but apparently it is around the start of line 10.
In the code you posted, the issue does not exist: Live On Coliru
You can presumably pass-in, as a parameter, a delegate which will convert from string to T.
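For illustration only, the same idea in Python terms (names are made up; in the C# context the parameter would be a delegate such as Func<string, T>):
def parse_all(strings, convert):
    # 'convert' plays the role of the delegate: it turns each string into a T
    return [convert(s) for s in strings]

print(parse_all(["1", "2", "3"], int))    # [1, 2, 3]
print(parse_all(["1.5", "2.5"], float))   # [1.5, 2.5]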
The following worked on an M1:
ProductName: macOS
ProductVersion: 11.2.1
BuildVersion: 20D74
% xcode-select --install
Agree to the Terms and Conditions prompt; it will return the following message on success.
% xcode-select: note: install requested for command line developer tools
The ES6 approach is very clean. First you initialize an array of length x, and then call the fill method on it.
let arr = new Array(3).fill(9)
this will create an array with 3 elements like:
[9, 9, 9]
I'd like to add the following to Shay Levy's correct answer:
You can make your life easier if you create a little batch script run.cmd
to launch your powershell script:
@echo off & setlocal
set batchPath=%~dp0
powershell.exe -noexit -file "%batchPath%SQLExecutor.ps1" "MY-PC"
Put it in the same path as SQLExecutor.ps1
and from now on you can run it by simply double-clicking on run.cmd
.
Note:
If you require command line arguments inside the run.cmd batch, simply pass them as %1
... %9
(or use %*
to pass all parameters) to the powershell script, i.e.
powershell.exe -noexit -file "%batchPath%SQLExecutor.ps1" %*
The variable batchPath
contains the executing path of the batch file itself (this is what the expression %~dp0
is used for). So you just put the powershell script in the same path as the calling batch file.
Perhaps setting both of the buttons layout_width properties to "fill_parent" will do the trick.
I just tested this code and it works in the emulator:
<LinearLayout android:layout_width="fill_parent"
android:layout_height="wrap_content">
<Button android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="hello world"/>
<Button android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="goodbye world"/>
</LinearLayout>
Be sure to set layout_width to "fill_parent" on both buttons.
This also happens when you import the following library into the api.php file. Some IDEs suggest importing it when they can't find the Route class.
Just remove it and everything will work fine.
use Illuminate\Routing\Route;
Update:
it seems that if you add this library instead, it won't lead to the error:
use Illuminate\Support\Facades\Route;
Dismiss the dialog when the activity is destroyed:
@Override
protected void onDestroy()
{
super.onDestroy();
if (pDialog!=null && pDialog.isShowing()){
pDialog.dismiss();
}
}
What about asynchronous object cloning done by a Promise (an async function wraps its return value in one)?
async function clone(thingy)
{
    if(thingy instanceof Promise)
    {
        throw Error("This function cannot clone Promises.");
    }
    // structuredClone() makes this an actual deep copy (available in modern browsers and Node 17+)
    return structuredClone(thingy);
}
In Swift 3:
webView.loadHTMLString("<img src=\"myImg.jpg\">", baseURL: Bundle.main.bundleURL)
This worked for me even when the image was inside of a folder without any modifications.
This is what I used.
$('#foo').popover({
placement : 'bottom',
title : 'Title',
html : true, // needed so the image markup in 'content' is rendered as HTML
content : '<div id="popOverBox"><img src="http://i.telegraph.co.uk/multimedia/archive/01515/alGore_1515233c.jpg" /></div>'
});
and for the HTML
<b id="foo" rel="popover">text goes here</b>
Just change the SqlCommand.CommandText
instead of creating a new SqlCommand
every time. There is no need to close and reopen the connection.
// Create the first command and execute
var command = new SqlCommand("<SQL Command>", myConnection);
var reader = command.ExecuteReader();
// Change the SQL Command and execute
command.CommandText = "<New SQL Command>";
command.ExecuteNonQuery();
I wrote the following class to get YouTube video duration using the YouTube API v3 (it returns thumbnails as well):
class Youtube
{
static $api_key = '<API_KEY>';
static $api_base = 'https://www.googleapis.com/youtube/v3/videos';
static $thumbnail_base = 'https://i.ytimg.com/vi/';
// $vid - video id in youtube
// returns - video info
public static function getVideoInfo($vid)
{
$params = array(
'part' => 'contentDetails',
'id' => $vid,
'key' => self::$api_key,
);
$api_url = Youtube::$api_base . '?' . http_build_query($params);
$result = json_decode(@file_get_contents($api_url), true);
if(empty($result['items'][0]['contentDetails']))
return null;
$vinfo = $result['items'][0]['contentDetails'];
$interval = new DateInterval($vinfo['duration']);
$vinfo['duration_sec'] = $interval->h * 3600 + $interval->i * 60 + $interval->s;
$vinfo['thumbnail']['default'] = self::$thumbnail_base . $vid . '/default.jpg';
$vinfo['thumbnail']['mqDefault'] = self::$thumbnail_base . $vid . '/mqdefault.jpg';
$vinfo['thumbnail']['hqDefault'] = self::$thumbnail_base . $vid . '/hqdefault.jpg';
$vinfo['thumbnail']['sdDefault'] = self::$thumbnail_base . $vid . '/sddefault.jpg';
$vinfo['thumbnail']['maxresDefault'] = self::$thumbnail_base . $vid . '/maxresdefault.jpg';
return $vinfo;
}
}
Please note that you'll need an API key to use the YouTube API.
The normal way to control this is with git config
For example
git config --global core.autocrlf true
For details, scroll down in this link to Pro Git to the section named "core.autocrlf"
If you want to know what file this is saved in, you can run the command:
git config --global --edit
and the git global config file should open in a text editor, and you can see where that file was loaded from.
For some number y
and some divisor x
compute the quotient (quotient
) and remainder (remainder
) as:
var quotient = Math.floor(y/x);
var remainder = y % x;
In Windows 7, you may have to edit this file:
C:\Users\<UserName>\AppData\Roaming\Subversion\servers
[global]
http-proxy-host = ip.add.re.ss
http-proxy-port = 3128
A few Unicode alternatives:
var s = "0";
var s1 = string.Concat(s.Select(c => $"{(int)c:x4}")); // left padded with 0 - "0030d835dfcfd835dfdad835dfe5d835dff0d835dffb"
var sL = BitConverter.ToString(Encoding.Unicode.GetBytes(s)).Replace("-", ""); // Little Endian "300035D8CFDF35D8DADF35D8E5DF35D8F0DF35D8FBDF"
var sB = BitConverter.ToString(Encoding.BigEndianUnicode.GetBytes(s)).Replace("-", ""); // Big Endian "0030D835DFCFD835DFDAD835DFE5D835DFF0D835DFFB"
// no encoding "300035D8CFDF35D8DADF35D8E5DF35D8F0DF35D8FBDF"
byte[] b = new byte[s.Length * sizeof(char)];
Buffer.BlockCopy(s.ToCharArray(), 0, b, 0, b.Length);
var sb = BitConverter.ToString(b).Replace("-", "");
The selected answer makes it sound slightly more complex than it is, the following is what you need (install npm first on your system).
npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data
You can skip the first elasticdump command for subsequent copies if the mappings remain constant.
I have just done a migration from AWS to Qbox.io with the above without any problems.
More details over at:
https://www.npmjs.com/package/elasticdump
Help page (as of Feb 2016) included for completeness:
elasticdump: Import and export tools for elasticsearch
Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]
--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--limit
How many objects to move in bulk per operation
limit is approximate for file streams
(default: 100)
--debug
Display the elasticsearch commands being used
(default: false)
--type
What are we exporting?
(default: data, options: [data, mapping])
--delete
Delete documents one-by-one from the input as they are
moved. Will not delete the source index
(default: false)
--searchBody
Preform a partial extract based on search results
(when ES is the input,
(default: '{"query": { "match_all": {} } }'))
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
(default: false)
--all
Load/store documents from ALL indexes
(default: false)
--bulk
Leverage elasticsearch Bulk API when writing documents
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
Time the nodes will hold the requested search in order.
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we process make?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--bulk-mode
The mode can be index, delete or update.
'index': Add or replace documents on the destination index.
'delete': Delete documents on destination index.
'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
(default: index)
--bulk-use-output-index-name
Force use of destination index name (the actual output URL)
as destination while bulk writing to ES. Allows
leveraging Bulk API copying data inside the same
elasticsearch instance.
(default: false)
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. If used in bulk writing,
it will result in the entire batch not being written.
Mostly used when you don't care too much if you lose some
data when importing but rather have speed.
--skip
Integer containing the number of rows you wish to skip
ahead from the input transport. When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc. This allows you
to start the dump again from the last known line written
(as logged by the `offset` in the output). Please be
advised that since no sorting is specified when the
dump is initially created, there's no real way to
guarantee that the skipped rows have already been
written/parsed. This is more of an option for when
you want to get most data as possible in the index
without concern for losing some rows in the process,
similar to the `timeout` option.
--inputTransport
Provide a custom js file to us as the input transport
--outputTransport
Provide a custom js file to us as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--help
This page
Examples:
# Copy an index from production to staging with mappings:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
# Backup and index to a gzip using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz
# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
--all=true \
--input=http://production-a.es.com:9200/ \
--output=/data/production.json
elasticdump \
--bulk=true \
--input=/data/production.json \
--output=http://production-b.es.com:9200/
# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody '{"query":{"term":{"username": "admin"}}}'
------------------------------------------------------------------------------
Learn more @ https://github.com/taskrabbit/elasticsearch-dump
window.location
adds an item to your history in that you can (or should be able to) click "Back" and go back to the current page.
window.location.replace
replaces the current history item so you can't go back to it.
See window.location
:
assign(url)
: Load the document at the provided URL.
replace(url)
:Replace the current document with the one at the provided URL. The difference from theassign()
method is that after usingreplace()
the current page will not be saved in session history, meaning the user won't be able to use the Back button to navigate to it.
Oh and generally speaking:
window.location.href = url;
is favoured over:
window.location = url;
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(
'localhost'))
channel = connection.channel()
channel.queue_delete(queue='queue-name')
connection.close()
Install pika package as follows
$ sudo pip install pika==0.9.8
The installation depends on pip and git-core packages, you may need to install them first.
On Ubuntu:
$ sudo apt-get install python-pip git-core
On Debian:
$ sudo apt-get install python-setuptools git-core
$ sudo easy_install pip
On Windows: To install easy_install, run the MS Windows Installer for setuptools
> easy_install pip
> pip install pika==0.9.8
You need to set the height of html
to 100%
body {
background-image:url("../images/myImage.jpg");
background-repeat: no-repeat;
background-size: 100% 100%;
}
html {
height: 100%
}
NSData *data = [strChangetoJSON dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *jsonResponse = [NSJSONSerialization JSONObjectWithData:data
options:kNilOptions
error:&error];
For example, you have an NSString strChangetoJSON with special characters. You can then convert that string to a JSON response using the code above.
'parent.relativePath' points at wrong local POM @ myGroup:myParentArtifactId:1.0, C:\myProjectDir\parent\pom.xml
This indicates that maven did search locally for the parent pom, but found that it was not the correct pom.
Does the pom.xml of parentpom correctly define the parent pom as the pom.xml of rootpom?
Does the rootpom folder contain pom.xml as well as the parentpom folder?
import cv2 as cv
im_color = cv.imread("lena.png", cv.IMREAD_COLOR)
im_gray = cv.cvtColor(im_color, cv.COLOR_BGR2GRAY)
At this point you have a color and a gray image. We are dealing with 8-bit
, uint8
images here. That means the images can have pixel values in the range of [0, 255]
and the values have to be integers.
Let's do a binary thresholding operation. It creates a black and white masked image. The black regions have value 0
and the white regions 255
_, mask = cv.threshold(im_gray, thresh=180, maxval=255, type=cv.THRESH_BINARY)
im_thresh_gray = cv.bitwise_and(im_gray, mask)
The mask can be seen below on the left. The image on its right is the result of applying the bitwise_and operation between the gray image and the mask. What happened is that the spatial locations where the mask had a pixel value of zero (black) became zero in the result image, and the locations where the mask had a pixel value of 255 (white) retained their original gray value.
To apply this mask to our original color image, we need to convert the mask into a 3 channel image as the original color image is a 3 channel image.
mask3 = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) # 3 channel mask
Then, we can apply this 3 channel mask to our color image using the same bitwise_and
function.
im_thresh_color = cv.bitwise_and(im_color, mask3)
mask3
from the code is the image below on the left, and im_thresh_color
is on its right.
You can plot the results and see for yourself.
cv.imshow("original image", im_color)
cv.imshow("binary mask", mask)
cv.imshow("3 channel mask", mask3)
cv.imshow("im_thresh_gray", im_thresh_gray)
cv.imshow("im_thresh_color", im_thresh_color)
cv.waitKey(0)
The original image is lenacolor.png
that I found here.
For me the issue got resolved by doing the following:
Navigated to Tools --> Options --> Projects and Solutions --> Web Projects
I could find the first check box "Use the 64 bit version of IIS Express of web sites and projects" was unchecked.
Selecting this check box helped me in launching the WCF Client.
VS Version : VS2019
If someone is using React, the following will be useful:
https://stackoverflow.com/a/62111884/1015678
const valueSetter = Object.getOwnPropertyDescriptor(this.textInputRef, 'value').set;
const prototype = Object.getPrototypeOf(this.textInputRef);
const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set;
if (valueSetter && valueSetter !== prototypeValueSetter) {
prototypeValueSetter.call(this.textInputRef, 'new value');
} else {
valueSetter.call(this.textInputRef, 'new value');
}
this.textInputRef.dispatchEvent(new Event('input', { bubbles: true }));
As most good PHP programmers like to use XDebug to actually step through running code and watch variables change in real-time, using dump()
feels like a step back to the bad old days.
That's why I made a Twig Debug extension and put it on Github.
https://github.com/delboy1978uk/twig-debug
composer require delboy1978uk/twig-debug
Then add the extension. If you aren't using Symfony, like this:
<?php
use Del\Twig\DebugExtension;
/** @var $twig Twig_Environment */
$twig->addExtension(new DebugExtension());
If you are, like this in your services YAML config:
twig_debugger:
class: Del\Twig\DebugExtension
tags:
- { name: twig.extension }
Once registered, you can now do this anywhere in a twig template:
{{ breakpoint() }}
Now, you can use XDebug, execution will pause, and you can see all the properties of both the Context and the Environment.
Have fun! :-D
I wrote the following function to return a default representation (0 or false) of a primitive or Number:
/**
* Retrieves the default value 0 / false for any primitive representative or
* {@link Number} type.
*
* @param type
*
* @return
*/
@SuppressWarnings("unchecked")
public static <T> T getDefault(final Class<T> type)
{
if (type.equals(Long.class) || type.equals(Long.TYPE))
return (T) new Long(0);
else if (type.equals(Integer.class) || type.equals(Integer.TYPE))
return (T) new Integer(0);
else if (type.equals(Double.class) || type.equals(Double.TYPE))
return (T) new Double(0);
else if (type.equals(Float.class) || type.equals(Float.TYPE))
return (T) new Float(0);
else if (type.equals(Short.class) || type.equals(Short.TYPE))
return (T) new Short((short) 0);
else if (type.equals(Byte.class) || type.equals(Byte.TYPE))
return (T) new Byte((byte) 0);
else if (type.equals(Character.class) || type.equals(Character.TYPE))
return (T) new Character((char) 0);
else if (type.equals(Boolean.class) || type.equals(Boolean.TYPE))
return (T) new Boolean(false);
else if (type.equals(BigDecimal.class))
return (T) BigDecimal.ZERO;
else if (type.equals(BigInteger.class))
return (T) BigInteger.ZERO;
else if (type.equals(AtomicInteger.class))
return (T) new AtomicInteger();
else if (type.equals(AtomicLong.class))
return (T) new AtomicLong();
else if (type.equals(DoubleAdder.class))
return (T) new DoubleAdder();
else
return null;
}
I use it in hibernate ORM projection queries when the underlying SQL query returns null instead of 0.
/**
* Retrieves the unique result or zero, <code>false</code> if it is
* <code>null</code> and represents a number
*
* @param criteria
*
* @return zero if result is <code>null</code>
*/
public static <T> T getUniqueResultDefault(final Class<T> type, final Criteria criteria)
{
final T result = (T) criteria.uniqueResult();
if (result != null)
return result;
else
return Utils.getDefault(type);
}
One of the many unnecessarily complex things about Java that make it unintuitive to use. Why instance variables are initialized with a default of 0 but local variables are not is not logical. Similarly, enums don't have built-in flag support, and there are many more missing options. Java lambdas are a nightmare compared to C#, and not allowing class extension methods is also a big problem.
The Java ecosystem comes up with excuses for why these aren't possible, but as the user/developer I don't care about their excuses. I want an easy approach, and if they don't fix those things they will lose big in the future, since C# and other languages are not waiting to make developers' lives simpler. It's just sad to see the decline over the last 10 years that I have worked daily with Java.
I resolved my problem on Ubuntu 20.04 by reinstalling php-mysql.
Remove php-mysql:
sudo apt purge php7.2-mysql
Then install php-mysql:
sudo apt install php7.2-mysql
It will add new configurations in php.ini
It's been a while since this question was asked; I hope this can help. Because you cannot change or alter the DB structure, you can do this, according to the PostgreSQL docs.
TRUNCATE -- empty a table or set of tables.
TRUNCATE [ TABLE ] [ ONLY ] name [ * ] [, ... ]
[ RESTART IDENTITY | CONTINUE IDENTITY ] [ CASCADE | RESTRICT ]
Description
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
Truncate the table othertable, and cascade to any tables that reference othertable via foreign-key constraints:
TRUNCATE othertable CASCADE;
The same, and also reset any associated sequence generators:
TRUNCATE bigtable, fattable RESTART IDENTITY;
Truncate and reset any associated sequence generators:
TRUNCATE revinfo RESTART IDENTITY CASCADE;
To expand on the Wikipedia-based answers:
The Social Security Administration (SSA) explicitly states in this document that having "000" in the first group of numbers "will NEVER be a valid SSN":
I'd consider that pretty definitive.
However, that the 2nd or 3rd group of numbers won't be "00" or "0000" can be inferred from a FAQ that the SSA publishes, which indicates that allocation of those groups starts at "01" or "0001":
But this is only a FAQ and it's never outright stated that "00" or "0000" will never be used.
In another FAQ (http://www.socialsecurity.gov/employer/randomizationfaqs.html#a0=6) they state that "00" or "0000" will never be used.
I can't find a reference to the 'advertisement' reserved SSNs on the SSA site. It appears that no numbers starting with a 3-digit number higher than 772 (according to the document referenced above) have been assigned yet, but there's nothing I could find that states those numbers are reserved. Wikipedia's reference is a book that I don't have access to. The Wikipedia information on the advertisement-reserved numbers is mentioned across the web, but many of those mentions are clearly copied from Wikipedia. It would be nice to have a citation from the SSA, though I suspect that now that Wikipedia has made the idea popular, these numbers would have to be reserved for advertisements even if they weren't initially.
The SSA has a page with a couple of stories about SSN's they've had to retire because they were used in advertisements/samples (maybe the SSA should post a link to whatever their current policy on this might be):
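For illustration, here is a small Python sketch encoding just the structural rules cited above (it deliberately does not check the higher-than-772 area numbers, since the text above treats those as unassigned rather than invalid):
import re

def plausible_ssn(ssn):
    # area 000, group 00, and serial 0000 are never valid per the SSA documents above
    m = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", ssn)
    if not m:
        return False
    area, group, serial = m.groups()
    return area != "000" and group != "00" and serial != "0000"

print(plausible_ssn("078-05-1120"))  # True structurally (a famously retired advertising SSN)
print(plausible_ssn("000-12-3456"))  # False: area 000 is never valid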
I would like to share, with code, what I understood.
The super keyword in Java is a reference variable that is used to refer to parent class objects. It is mainly used in the following contexts:
1. Use of super with variables:
class Vehicle
{
int maxSpeed = 120;
}
/* sub class Car extending vehicle */
class Car extends Vehicle
{
int maxSpeed = 180;
void display()
{
/* print maxSpeed of base class (vehicle) */
System.out.println("Maximum Speed: " + super.maxSpeed);
}
}
/* Driver program to test */
class Test
{
public static void main(String[] args)
{
Car small = new Car();
small.display();
}
}
Output:-
Maximum Speed: 120
2. Use of super with methods:
/* Base class Person */
class Person
{
void message()
{
System.out.println("This is person class");
}
}
/* Subclass Student */
class Student extends Person
{
void message()
{
System.out.println("This is student class");
}
// Note that display() is only in Student class
void display()
{
// will invoke or call current class message() method
message();
// will invoke or call parent class message() method
super.message();
}
}
/* Driver program to test */
class Test
{
public static void main(String args[])
{
Student s = new Student();
// calling display() of Student
s.display();
}
}
Output:-
This is student class
This is person class
3. Use of super with constructors:
class Person
{
Person()
{
System.out.println("Person class Constructor");
}
}
/* subclass Student extending the Person class */
class Student extends Person
{
Student()
{
// invoke or call parent class constructor
super();
System.out.println("Student class Constructor");
}
}
/* Driver program to test*/
class Test
{
public static void main(String[] args)
{
Student s = new Student();
}
}
Output:-
Person class Constructor
Student class Constructor
Spark's df.write() API will create multiple part files inside the given path. To force Spark to write only a single part file, use df.coalesce(1).write.csv(...) instead of df.repartition(1).write.csv(...), since coalesce is a narrow transformation whereas repartition is a wide transformation; see Spark - repartition() vs coalesce()
df.coalesce(1).write.csv(filepath,header=True)
will create a folder at the given filepath containing one part-0001-...-c000.csv file. Use
cat filepath/part-0001-...-c000.csv > filename_you_want.csv
to get a user-friendly filename.
You can do:
getMap(locations.toArray(new WorldLocation[locations.size()]));
or
getMap(locations.toArray(new WorldLocation[0]));
(Note that getMap(new WorldLocation[locations.size()]) would not work here: it passes an array of the right size but full of nulls, without copying the list contents.)
@SuppressWarnings("unchecked") may be needed to suppress the IDE warning.
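Here is a minimal, self-contained sketch of the two working forms, using String in place of WorldLocation so it runs on its own:

import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> locations = new ArrayList<>();
        locations.add("Berlin");
        locations.add("Tokyo");

        // Both forms copy the list contents into a new typed array.
        String[] a = locations.toArray(new String[locations.size()]);
        String[] b = locations.toArray(new String[0]); // the JVM sizes it for you
        System.out.println(a.length + " " + b.length); // prints: 2 2
    }
}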
With BDD it would be:
@Test
public void testOrderWithBDD() {
// Given
ServiceClassA firstMock = mock(ServiceClassA.class);
ServiceClassB secondMock = mock(ServiceClassB.class);
//create inOrder object passing any mocks that need to be verified in order
InOrder inOrder = inOrder(firstMock, secondMock);
willDoNothing().given(firstMock).methodOne();
willDoNothing().given(secondMock).methodTwo();
// When
firstMock.methodOne();
secondMock.methodTwo();
// Then
then(firstMock).should(inOrder).methodOne();
then(secondMock).should(inOrder).methodTwo();
}
Using the Regex methods in .NET you should be able to match any non-digit character using \D, like so:
phoneNumber = Regex.Replace(phoneNumber, "\\D", String.Empty);
Using lambda:
strList.stream().map(org.apache.commons.lang3.math.NumberUtils::toInt).collect(Collectors.toList());
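If you'd rather avoid the Commons Lang dependency, here is a plain-JDK sketch of the same idea (note that Integer.parseInt, unlike NumberUtils.toInt, throws NumberFormatException on bad input):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ToIntDemo {
    public static void main(String[] args) {
        List<String> strList = Arrays.asList("1", "2", "3");
        // map each string to its integer value and collect into a new list
        List<Integer> ints = strList.stream()
                .map(Integer::parseInt)
                .collect(Collectors.toList());
        System.out.println(ints); // prints: [1, 2, 3]
    }
}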
From the Web Designer’s Guide to PNG Image Format
PNG-8 and PNG-24
There are two PNG formats: PNG-8 and PNG-24. The numbers are shorthand for saying "8-bit PNG" or "24-bit PNG." Not to get too much into technicalities — because as a web designer, you probably don’t care — 8-bit PNGs mean that the image is 8 bits per pixel, while 24-bit PNGs mean 24 bits per pixel.
To sum up the difference in plain English: Let’s just say PNG-24 can handle a lot more color and is good for complex images with lots of color such as photographs (just like JPEG), while PNG-8 is more optimized for things with simple colors, such as logos and user interface elements like icons and buttons.
Another difference is that PNG-24 natively supports alpha transparency, which is good for transparent backgrounds. This difference is not 100% true because Adobe products’ Save for Web command allows PNG-8 with alpha transparency.
Switching to another branch in Git is straightforward:
git-checkout - Switch branches or restore working tree files
git fetch origin <----this will fetch the branch
git checkout branch_name <--- Switching the branch
Before switching branches, make sure you don't have any modified files; if you do, you can either commit the changes or stash them.
Just noticed that quick search has been included in Eclipse 4.13 as a built-in function, invoked by typing Ctrl+Alt+Shift+L (or Cmd+Alt+Shift+L on Mac):
https://www.eclipse.org/eclipse/news/4.13/platform.php#quick-text-search
@Melkhiah66 You can use the equals method instead of the '==' operator to check equality.
If you use intern(), it checks whether the string is already in the pool; if present it returns the pooled reference, so '==' compares equal exactly when the contents are equal. The equals method compares the actual character contents and gets you the required result.
public class Demo
{
public static void main(String[] args)
{
String str1 = "Jorman 14988611";
String str2 = new StringBuffer("Jorman").append(" 14988611").toString();
String str3 = str2.intern();
System.out.println("str1 == str2 " + (str1 == str2)); //gives false
System.out.println("str1 == str3 " + (str1 == str3)); //gives true
System.out.println("str1 equals str2 " + (str1.equals(str2))); //gives true
System.out.println("str1 equals str3 " + (str1.equals(str3))); //gives true
}
}
The accepted answer is Oracle-only. Here's a slightly more sophisticated answer in standard SQL:
Who has the best overall homework result (maximum sum of homework points)?
SELECT FIRST, LAST, SUM(POINTS) AS TOTAL
FROM STUDENTS S, RESULTS R
WHERE S.SID = R.SID AND R.CAT = 'H'
GROUP BY S.SID, FIRST, LAST
HAVING SUM(POINTS) >= ALL (SELECT SUM (POINTS)
FROM RESULTS
WHERE CAT = 'H'
GROUP BY SID)
And a more difficult example, which needs some explanation that I don't have time for atm:
Give the book (ISBN and title) that is most popular in 2008, i.e., which is borrowed most often in 2008.
SELECT X.ISBN, X.title, X.loans
FROM (SELECT Book.ISBN, Book.title, count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title) X
HAVING loans >= ALL (SELECT count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title);
Hope this helps (anyone).. :)
Regards, Guus
Windows version (needs socat though)
C:\Program Files (x86)\Git\bin>type gitproxy.cmd
socat STDIO PROXY:proxy.mycompany.de:%1:%2,proxyport=3128
setting it up:
C:\Users\exhau\AppData\Roaming\npm>git config --global core.gitproxy gitproxy.cmd
Instead of overriding, you can add another class to the element, which gives you extra abilities. For example:
HTML
<div class="style1 style2"></div>
CSS
/* only style from the first stylesheet */
.style1 {
    width: 100%;
}
/* only style from the second stylesheet */
.style2 {
    width: 50%;
}
/* overrides both */
.style1.style2 {
    width: 70%;
}
To get the currently logged in user:
System.getProperty("user.name"); //platform independent
and the hostname of the machine:
java.net.InetAddress localMachine = java.net.InetAddress.getLocalHost();
System.out.println("Hostname of local machine: " + localMachine.getHostName());
Move
Random pp = new Random();
int a1 = pp.nextInt(10);
TextView tv = (TextView)findViewById(R.id.tv);
tv.setText(a1);
to inside onCreate(), and change tv.setText(a1); to tv.setText(String.valueOf(a1)):
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Random pp = new Random();
int a1 = pp.nextInt(10);
TextView tv = (TextView)findViewById(R.id.tv);
tv.setText(String.valueOf(a1));
}
First issue: findViewById() was called before onCreate(), which would throw an NPE.
Second issue: passing an int directly to a TextView calls the overload that looks up a string resource (from R.string). Therefore, we use String.valueOf() to force the String overload.
This is just a sample code, but it may help you get on your way:
Public Sub testIt()
Workbooks("Workbook2").Activate
ActiveWorkbook.Sheets("Sheet2").Activate
ActiveSheet.Range("B3").Select
ActiveCell.EntireRow.Insert
End Sub
I am assuming that you can open the book (called Workbook2 in the example).
I think (but I'm not sure) you can squash all this in a single line of code:
Workbooks("Workbook2").Sheets("Sheet2").Range("B3").EntireRow.Insert
This way you won't need to activate the workbook (or sheet or cell)... Obviously, the book has to be open.
Use the code below to pass a string from one JSP to another:
A.jsp
<% String userid="Banda";%>
<form action="B.jsp" method="post">
<%
session.setAttribute("userId", userid);
%>
<input type="submit"
value="Login">
</form>
B.jsp
<%String userid = session.getAttribute("userId").toString(); %>
Hello<%=userid%>
This worked for me:
tar -zcvf target.tar.gz target/ --exclude="target/backups" --exclude="target/cache"
This issue is specific to Chrome, but there are two paths you can take to fix it.
I noticed the error once I added this specific header to my PHP script.
header('Content-Type: application/json');
The error appears to be related to PHP sessions when sending response headers. So according to chromium bug report 424599, this was fixed and you can just update to a newer version of Chrome. But if for some reason you can't or don't want to update, the workaround would be to remove these response headers from your PHP script if possible (that's what I did because it wasn't required).
For anyone else who came here looking to do the same with docker-compose: you just need to prepend bash -c and enclose multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
I use something like this function I created...
Object.getKeys = function(obj, add) {
if(obj === undefined || obj === null) {
return undefined;
}
var keys = [];
if(add !== undefined) {
keys = jQuery.merge(keys, add);
}
for(key in obj) {
if(obj.hasOwnProperty(key)) {
keys.push(key);
}
}
return keys;
};
I think you could set obj to self (or something better) in the first test. It turns out I sometimes use this to check for emptiness too, which is why I wrote it that way. Also, attaching getKeys directly to Object doesn't seem to make it available on plain literals like {}; you might be supposed to put it on Object.prototype instead, but that seems to cause a conflict with GreenSock and similar libraries.
I've added some code to my project, so it's more convenient.
HtmlExtensions.cs:
namespace System.Web.Mvc.Html
{
public static class HtmlExtensions
{
public static MvcForm BeginForm(this HtmlHelper htmlHelper, string formId)
{
return htmlHelper.BeginForm(null, null, FormMethod.Post, new { id = formId });
}
public static MvcForm BeginForm(this HtmlHelper htmlHelper, string formId, FormMethod method)
{
return htmlHelper.BeginForm(null, null, method, new { id = formId });
}
}
}
MySignupForm.cshtml:
@using (Html.BeginForm("signupform"))
{
@* Some fields *@
}
In my experience, switching branches in the superproject (or future checkouts) will still cause detached HEADs in submodules, regardless of whether the submodule is properly added and tracked (i.e., the answers from @djacobs7 and @Johnny Z).
Instead of checking out the correct branch manually or through a script, git submodule foreach can be used.
This will check the submodule config file for the branch property and checkout the set branch.
git submodule foreach -q --recursive 'branch="$(git config -f <path>.gitmodules submodule.$name.branch)"; git checkout $branch'
Alternatively, you could read your CSV with header=None and then set the column names with df.columns:
Cov = pd.read_csv("path/to/file.txt", sep='\t', header=None)
Cov.columns = ["Sequence", "Start", "End", "Coverage"]
When you call Business.where(:user_id => current_user.id) you will get an array. This array may contain zero, one, or many objects, but it won't be nil, so the check == nil will never be true.
You can try the following:
if Business.where(:user_id => current_user.id).count == 0
So you check the number of elements in the array and compare them to zero.
or you can try:
if Business.find_by_user_id(current_user.id).nil?
this will return one record or nil.
Greedy matching. The default behavior of regular expressions is to be greedy: the engine matches as much as possible while still conforming to the pattern, even when a smaller match would have been syntactically sufficient.
Example:
import re
text = "<body>Regex Greedy Matching Example </body>"
re.findall('<.*>', text)
#> ['<body>Regex Greedy Matching Example </body>']
Instead of matching till the first occurrence of ‘>’, it extracted the whole string. This is the default greedy or ‘take it all’ behavior of regex.
Lazy matching, on the other hand, 'takes as little as possible'. This can be achieved by adding a ? after the quantifier in the pattern.
Example:
re.findall('<.*?>', text)
#> ['<body>', '</body>']
If you want only the first match to be retrieved, use the search method instead.
re.search('<.*?>', text).group()
#> '<body>'
Source: Python Regex Examples
I'm surprised no one has mentioned a simple function that takes a string and a list.
function in_list(needle, hay)
{
var i, len;
for (i = 0, len = hay.length; i < len; i++)
{
if (hay[i] == needle) { return true; }
}
return false;
}
var alist = ["test"];
console.log(in_list("test", alist));
I created nice Kotlin-esque extension functions, in case anyone is interested:
fun Activity.hideKeyBoard() {
val view = this.currentFocus
val methodManager = this.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
assert(view != null)
methodManager.hideSoftInputFromWindow(view!!.windowToken, InputMethodManager.HIDE_NOT_ALWAYS)
}
fun Activity.showKeyboard() {
val view = this.currentFocus
val methodManager = this.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
assert(view != null)
methodManager.showSoftInput(view, InputMethodManager.SHOW_IMPLICIT)
}
You might also try
Application.CalculateFull
or
Application.CalculateFullRebuild
if you don't mind rebuilding all open workbooks, rather than just the active worksheet. (CalculateFullRebuild rebuilds dependencies as well.)
Try the overloaded Arrays.toString methods.
Or else, try the generic implementation below:
public static void main(String... args) throws Exception {
String[] array = {"ABC", "XYZ", "PQR"};
System.out.println(new Test().join(array, ", "));
}
public <T> String join(T[] array, String cement) {
StringBuilder builder = new StringBuilder();
if(array == null || array.length == 0) {
return null;
}
for (T t : array) {
builder.append(t).append(cement);
}
builder.delete(builder.length() - cement.length(), builder.length());
return builder.toString();
}
I used the following two steps which I found in the comments/posts linked in the other answers:
Step one: Convert the x.509 cert and key to a pkcs12 file
openssl pkcs12 -export -in server.crt -inkey server.key \
-out server.p12 -name [some-alias] \
-CAfile ca.crt -caname root
Note: Make sure you put a password on the pkcs12 file - otherwise you'll get a null pointer exception when you try to import it. (In case anyone else had this headache). (Thanks jocull!)
Note 2: You might want to add the -chain
option to preserve the full certificate chain. (Thanks Mafuba)
Step two: Convert the pkcs12 file to a Java keystore
keytool -importkeystore \
-deststorepass [changeit] -destkeypass [changeit] -destkeystore server.keystore \
-srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass some-password \
-alias [some-alias]
Finished
OPTIONAL Step zero: Create self-signed certificate
openssl genrsa -out server.key 2048
openssl req -new -out server.csr -key server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Cheers!
Upload the SQL file. Make sure to select your database first in this award-winning GUI.
If you look at the code of %in%
function (x, table) match(x, table, nomatch = 0L) > 0L
then you should be able to write your own version of the opposite. I use
`%not in%` <- function (x, table) is.na(match(x, table, nomatch=NA_integer_))
Another way is:
function (x, table) match(x, table, nomatch = 0L) == 0L
@Nat gave a good answer. But since there is no way to shorten the code, why not use concatenation to 'generate' the code you need? It works for me when I'm too lazy to type the whole code into the cell.
So all we need is to identify the pattern > use Excel to build the pattern 'structure' > add "=" and paste it into the intended cell.
For example, you want to achieve (I mean, enter in the cell):
=IF('testsheet'!$C$1 <= 99,'testsheet'!$A$1,"") &IF('testsheet'!$C$2 <= 99,'testsheet'!$A$2,"") &IF('testsheet'!$C$3 <= 99,'testsheet'!$A$3,"") &IF('testsheet'!$C$4 <= 99,'testsheet'!$A$4,"") &IF('testsheet'!$C$5 <= 99,'testsheet'!$A$5,"") &IF('testsheet'!$C$6 <= 99,'testsheet'!$A$6,"") &IF('testsheet'!$C$7 <= 99,'testsheet'!$A$7,"") &IF('testsheet'!$C$8 <= 99,'testsheet'!$A$8,"") &IF('testsheet'!$C$9 <= 99,'testsheet'!$A$9,"") &IF('testsheet'!$C$10 <= 99,'testsheet'!$A$10,"") &IF('testsheet'!$C$11 <= 99,'testsheet'!$A$11,"") &IF('testsheet'!$C$12 <= 99,'testsheet'!$A$12,"") &IF('testsheet'!$C$13 <= 99,'testsheet'!$A$13,"") &IF('testsheet'!$C$14 <= 99,'testsheet'!$A$14,"") &IF('testsheet'!$C$15 <= 99,'testsheet'!$A$15,"") &IF('testsheet'!$C$16 <= 99,'testsheet'!$A$16,"") &IF('testsheet'!$C$17 <= 99,'testsheet'!$A$17,"") &IF('testsheet'!$C$18 <= 99,'testsheet'!$A$18,"") &IF('testsheet'!$C$19 <= 99,'testsheet'!$A$19,"") &IF('testsheet'!$C$20 <= 99,'testsheet'!$A$20,"") &IF('testsheet'!$C$21 <= 99,'testsheet'!$A$21,"") &IF('testsheet'!$C$22 <= 99,'testsheet'!$A$22,"") &IF('testsheet'!$C$23 <= 99,'testsheet'!$A$23,"") &IF('testsheet'!$C$24 <= 99,'testsheet'!$A$24,"") &IF('testsheet'!$C$25 <= 99,'testsheet'!$A$25,"") &IF('testsheet'!$C$26 <= 99,'testsheet'!$A$26,"") &IF('testsheet'!$C$27 <= 99,'testsheet'!$A$27,"") &IF('testsheet'!$C$28 <= 99,'testsheet'!$A$28,"") &IF('testsheet'!$C$29 <= 99,'testsheet'!$A$29,"") &IF('testsheet'!$C$30 <= 99,'testsheet'!$A$30,"") &IF('testsheet'!$C$31 <= 99,'testsheet'!$A$31,"") &IF('testsheet'!$C$32 <= 99,'testsheet'!$A$32,"") &IF('testsheet'!$C$33 <= 99,'testsheet'!$A$33,"") &IF('testsheet'!$C$34 <= 99,'testsheet'!$A$34,"") &IF('testsheet'!$C$35 <= 99,'testsheet'!$A$35,"") &IF('testsheet'!$C$36 <= 99,'testsheet'!$A$36,"") &IF('testsheet'!$C$37 <= 99,'testsheet'!$A$37,"") &IF('testsheet'!$C$38 <= 99,'testsheet'!$A$38,"") &IF('testsheet'!$C$39 <= 99,'testsheet'!$A$39,"") &IF('testsheet'!$C$40 <= 99,'testsheet'!$A$40,"")
I didn't type it; I just used the "&" symbol to combine arranged cells in Excel (in another file, not the file we are working on).
Notice that :
part1 > IF('testsheet'!$C$
part2 > 1 to 40
part3 > <= 99,'testsheet'!$A$
part4 > 1 to 40
part5 > ,"") &
Now select A2:I2 and drag it down. Notice that the numbers increment with each added row, and the generated text is combined, cell by cell and line by line.
Result = the code you intended.
I've used Excel/OpenOffice Calc to help me generate code for my projects. Works for me; hope it helps others. (:
You should use the adb shell getprop command and grep the specific info about your current device. For additional information you can read the documentation:
Android Debug Bridge documentation
I added some examples below:
language - adb shell getprop | grep language
[persist.sys.language]: [en]
[ro.product.locale.language]: [en]
boot complete ( device ready after reset) - adb shell getprop | grep boot_completed
[sys.boot_completed]: [1]
device model - adb shell getprop | grep model
[ro.product.model]: [Nexus 4]
sdk version - adb shell getprop | grep sdk
[ro.build.version.sdk]: [22]
time zone - adb shell getprop | grep timezone
[persist.sys.timezone]: [Asia/China]
serial number - adb shell getprop | grep serialno
[ro.boot.serialno]: [1234567]
(Ctrl+H) -> the Find and Replace window opens -> in "Find what" enter "0", leave "Replace with" blank -> click Options and tick "Match entire cell contents" -> click the "Replace All" button.
This may be helpful when searching for keys present in nested objects and nested arrays; it is a generic solution for all such cases.
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
public class MyClass
{
public static Object finalresult = null;
public static void main(String args[]) throws JSONException
{
    // 'myjsonstring' is your parsed JSONObject; 'key' is the key to search for
    System.out.println(myfunction(myjsonstring, key));
}
public static Object myfunction(JSONObject x,String y) throws JSONException
{
JSONArray keys = x.names();
for(int i=0;i<keys.length();i++)
{
if(finalresult!=null)
{
return finalresult; //To kill the recursion
}
String current_key = keys.get(i).toString();
if(current_key.equals(y))
{
finalresult=x.get(current_key);
return finalresult;
}
if(x.get(current_key).getClass().getName().equals("org.json.JSONObject"))
{
myfunction((JSONObject) x.get(current_key),y);
}
else if(x.get(current_key).getClass().getName().equals("org.json.JSONArray"))
{
for(int j=0;j<((JSONArray) x.get(current_key)).length();j++)
{
if(((JSONArray) x.get(current_key)).get(j).getClass().getName().equals("org.json.JSONObject"))
{
myfunction((JSONObject)((JSONArray) x.get(current_key)).get(j),y);
}
}
}
}
return null;
}
}
Logic:
If you want to find the control from code-behind, you have to use the runat="server" attribute on the control. Then you can use Control.FindControl:
<div class="tab-pane active" id="portlet_tab1" runat="server">
Control myControl1 = FindControl("portlet_tab1");
if(myControl1!=null)
{
//do stuff
}
If you use runat="server" and your control is inside a ContentPlaceHolder, be aware that the client ID will no longer be portlet_tab1; it will render in the ctl00 format, something like #ctl00_ContentPlaceHolderMain_portlet_tab1. You will have to adjust the name if you use jQuery.
You can also do it using jQuery on client side without using the runat-server attribute:
<script type='text/javascript'>
$("#portlet_tab1").removeClass("Active");
</script>
For those who have problems with StaggeredGridLayoutManager (like https://imgur.com/XVutH5u): RecyclerView's methods
getChildAdapterPosition(view)
getChildLayoutPosition(view)
sometimes return -1 as the index, so we might face trouble setting the item decoration. My solution is to override the deprecated ItemDecoration method:
public void getItemOffsets(Rect outRect, int itemPosition, RecyclerView parent)
instead of the newer one:
public void getItemOffsets(Rect outRect, View view, RecyclerView parent, State state)
like this:
recyclerView.addItemDecoration(new RecyclerView.ItemDecoration() {
@Override
public void getItemOffsets(Rect outRect, int itemPosition, RecyclerView parent) {
TheAdapter.VH vh = (TheAdapter.VH) recyclerView.findViewHolderForAdapterPosition(itemPosition);
View itemView = vh.itemView; //itemView is the base view of viewHolder
//or instead of the 2 lines above maybe it's possible to use View itemView = layoutManager.findViewByPosition(itemPosition) ... NOT TESTED
StaggeredGridLayoutManager.LayoutParams itemLayoutParams = (StaggeredGridLayoutManager.LayoutParams) itemView.getLayoutParams();
int spanIndex = itemLayoutParams.getSpanIndex();
if (spanIndex == 0)
...
else
...
}
});
Seems to work for me so far :)
I use tmux for a multiple window/pane development environment on remote hosts. It's really simple to detach and keep the process running in the background. Have a look at tmux
Like that:
$ command >>output 2>>error
For the record, the spring.jpa.hibernate.ddl-auto property is Spring Boot specific and is its way to specify a value that will eventually be passed to Hibernate under the property it knows, hibernate.hbm2ddl.auto.
The values create, create-drop, validate, and update basically influence how the schema management tool will manipulate the database schema at startup.
For example, the update operation will query the JDBC driver's API to get the database metadata, and then Hibernate compares the object model it creates based on reading your annotated classes or HBM XML mappings and will attempt to adjust the schema on the fly.
The update operation will attempt to add new columns, constraints, etc., but will never remove a column or constraint that may have existed previously but no longer does as part of the object model from a prior run.
Typically in test-case scenarios you'll likely use create-drop, so that you create your schema, your test case adds some mock data, you run your tests, and then during the test-case cleanup the schema objects are dropped, leaving an empty database.
In development it's common to see developers use update to automatically modify the schema to add new additions upon restart. But again, understand that this does not remove a column or constraint that may exist from previous executions but is no longer necessary.
In production it's often highly recommended you use none, or simply don't specify this property. That is because it's common practice for DBAs to review migration scripts for database changes, particularly if your database is shared across multiple services and applications.
Yes, you can use conditional statements with EJS, like if/else, the ternary operator, or even a switch case.
For example:
Ternary operator:
<%- role == 'Admin' ? 'Super Admin' : role == 'subAdmin' ? 'Sub Admin' : role %>
Switch Case
<% switch (role) {
case 'Admin' : %>
Super Admin
<% break;
case 'eventAdmin' : %>
Event Admin
<% break;
case 'subAdmin' : %>
Sub Admin
<% break;
} %>
mail can refer to quite a few programs on a Linux system. What you want behind it is either sendmail or postfix. I recommend the latter.
You can install it via your favorite package manager. Then you have to configure it, and once you have done that, you can send email like this:
echo "My message" | mail -s subject [email protected]
See the manual for more information.
As far as configuring postfix goes, there's plenty of articles on the internet on how to do it. Unless you're on a public server with a registered domain, you generally want to forward the email to a SMTP server that you can send email from.
For gmail, for example, follow http://rtcamp.com/tutorials/linux/ubuntu-postfix-gmail-smtp/ or any other similar tutorial.
Well, you're on the right path, Benno!
There are some tips regarding VBA programming that might help you out.
Always use explicit references to the sheet you want to interact with. Otherwise, Excel may 'assume' your code applies to the active sheet, and eventually you'll see it screw your spreadsheet up.
As lionz mentioned, get in touch with the native methods Excel offers. You might use them in most of your tricks.
Explicitly declare your variables... they'll show the list of methods each object offers in VBA. It might save you time digging on the internet.
Now, let's have a draft code...
Remember this code must be within the Excel Sheet object, as explained by lionz. It only applies to Sheet 2, is up to you to adapt it to both Sheet 2 and Sheet 3 in the way you prefer.
Hope it helps!
Private Sub Worksheet_Change(ByVal Target As Range)
Dim oSheet As Excel.Worksheet
'We only want to do something if the changed cell is B6, right?
If Target.Address = "$B$6" Then
'Checks if it's a number...
If IsNumeric(Target.Value) Then
'Let's avoid values out of your bonds, correct?
If Target.Value > 0 And Target.Value < 51 Then
'Let's assign the worksheet we'll show / hide rows to one variable and then
' use only the reference to the variable itself instead of the sheet name.
' It's safer.
'You can alternatively replace 'sheet 2' by 2 (without quotes) which will represent
' the sheet index within the workbook
Set oSheet = ActiveWorkbook.Sheets("Sheet 2")
'We'll unhide before hide, to ensure we hide the correct ones
oSheet.Range("A7:A56").EntireRow.Hidden = False
oSheet.Range("A" & Target.Value + 7 & ":A56").EntireRow.Hidden = True
End If
End If
End If
End Sub
Thread is a lower-level concept: if you're directly starting a thread, you know it will be a separate thread, rather than executing on the thread pool etc.
Task is more than just an abstraction of "where to run some code" though - it's really just "the promise of a result in the future". So as some different examples:
Task.Delay doesn't need any actual CPU time; it's just like setting a timer to go off in the future.
A task returned by WebClient.DownloadStringTaskAsync won't take much CPU time locally; it's representing a result which is likely to spend most of its time in network latency or remote work (at the web server).
A task returned by Task.Run() really is saying "I want you to execute this code separately"; the exact thread on which that code executes depends on a number of factors.
Note that the Task<T> abstraction is pivotal to the async support in C# 5.
In general, I'd recommend that you use the higher level abstraction wherever you can: in modern C# code you should rarely need to explicitly start your own thread.
I finally got this working in my own project (which happens to be in Xcode). Adding these 3 scripts to your build phase will automatically create a Disk Image for your product that is nice and neat. All you have to do is build your project and the DMG will be waiting in your products folder.
Script 1 (Create Temp Disk Image):
#!/bin/bash
#Create a R/W DMG
dir="$TEMP_FILES_DIR/disk"
dmg="$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg"
rm -rf "$dir"
mkdir "$dir"
cp -R "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.app" "$dir"
ln -s "/Applications" "$dir/Applications"
mkdir "$dir/.background"
cp "$PROJECT_DIR/$PROJECT_NAME/some_image.png" "$dir/.background"
rm -f "$dmg"
hdiutil create "$dmg" -srcfolder "$dir" -volname "$PRODUCT_NAME" -format UDRW
#Mount the disk image, and store the device name
hdiutil attach "$dmg" -noverify -noautoopen -readwrite
Script 2 (Set Window Properties Script):
#!/usr/bin/osascript
#get the dimensions of the main window using a bash script
set {width, height, scale} to words of (do shell script "system_profiler SPDisplaysDataType | awk '/Main Display: Yes/{found=1} /Resolution/{width=$2; height=$4} /Retina/{scale=($2 == \"Yes\" ? 2 : 1)} /^ {8}[^ ]+/{if(found) {exit}; scale=1} END{printf \"%d %d %d\\n\", width, height, scale}'")
set x to ((width / 2) / scale)
set y to ((height / 2) / scale)
#get the product name using a bash script
set {product_name} to words of (do shell script "printf \"%s\", $PRODUCT_NAME")
set background to alias ("Volumes:"&product_name&":.background:some_image.png")
tell application "Finder"
tell disk product_name
open
set current view of container window to icon view
set toolbar visible of container window to false
set statusbar visible of container window to false
set the bounds of container window to {x, y, (x + 479), (y + 383)}
set theViewOptions to the icon view options of container window
set arrangement of theViewOptions to not arranged
set icon size of theViewOptions to 128
set background picture of theViewOptions to background
set position of item (product_name & ".app") of container window to {100, 225}
set position of item "Applications" of container window to {375, 225}
update without registering applications
close
end tell
end tell
The above measurements for the window work for my project specifically, due to the size of my background picture and icon resolution; you may need to modify these values for your own project.
Script 3 (Make Final Disk Image Script):
#!/bin/bash
dir="$TEMP_FILES_DIR/disk"
cp "$PROJECT_DIR/$PROJECT_NAME/some_other_image.png" "$dir/"
#unmount the temp image file, then convert it to final image file
sync
sync
hdiutil detach /Volumes/$PRODUCT_NAME
rm -f "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg"
hdiutil convert "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg" -format UDZO -imagekey zlib-level=9 -o "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg"
rm -f "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.temp.dmg"
#Change the icon of the image file
sips -i "$dir/some_other_image.png"
DeRez -only icns "$dir/some_other_image.png" > "$dir/tmpicns.rsrc"
Rez -append "$dir/tmpicns.rsrc" -o "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg"
SetFile -a C "$BUILT_PRODUCTS_DIR/$PRODUCT_NAME.dmg"
rm -rf "$dir"
Make sure the image files you are using are in the $PROJECT_DIR/$PROJECT_NAME/ directory!
You can use reset_index to turn the index back into a column:
monthly_mean.reset_index().plot(x='index', y='A')
1. Find the table dbo.__EFMigrationsHistory, then delete the migration record that you want to remove. 2. Run "remove-migration" in the PM (Package Manager Console). Works for me.
A native JavaScript implementation of WunderBart's answer:
function onClick() {
// create invisible dummy input to receive the focus first
const fakeInput = document.createElement('input')
fakeInput.setAttribute('type', 'text')
fakeInput.style.position = 'absolute'
fakeInput.style.opacity = 0
fakeInput.style.height = 0
fakeInput.style.fontSize = '16px' // disable auto zoom
// you may need to append to another element depending on the browser's auto
// zoom/scroll behavior
document.body.prepend(fakeInput)
// focus so that subsequent async focus will work
fakeInput.focus()
setTimeout(() => {
// now we can focus on the target input
targetInput.focus()
// cleanup
fakeInput.remove()
}, 1000)
}
Other References: Disable Auto Zoom in Input "Text" tag - Safari on iPhone
While @Eli is quite correct that there usually isn't much of a need to do it, it is possible. savefig takes a bbox_inches argument that can be used to selectively save only a portion of a figure to an image.
Here's a quick example:
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
# Make an example plot with two subplots...
fig = plt.figure()
ax1 = fig.add_subplot(2,1,1)
ax1.plot(range(10), 'b-')
ax2 = fig.add_subplot(2,1,2)
ax2.plot(range(20), 'r^')
# Save the full figure...
fig.savefig('full_figure.png')
# Save just the portion _inside_ the second axis's boundaries
extent = ax2.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
fig.savefig('ax2_figure.png', bbox_inches=extent)
# Pad the saved area by 10% in the x-direction and 20% in the y-direction
fig.savefig('ax2_figure_expanded.png', bbox_inches=extent.expanded(1.1, 1.2))
The full figure:
Area inside the second subplot:
Area around the second subplot padded by 10% in the x-direction and 20% in the y-direction:
Put this in your MainActivity:
public class MainActivity extends AppCompatActivity { // class header restored; AppCompatActivity assumed
public EditText bizname, storeno, rcpt, item, price, tax, total;
public Button click, click2;
int contentView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate( savedInstanceState );
setContentView( R.layout.main_activity );
bizname = (EditText) findViewById( R.id.editBizName );
item = (EditText) findViewById( R.id.editItem );
price = (EditText) findViewById( R.id.editPrice );
tax = (EditText) findViewById( R.id.editTax );
total = (EditText) findViewById( R.id.editTotal );
click = (Button) findViewById( R.id.button );
}
}
Put this under a button or something
public void clickBusiness(View view) {
checkPermsOfStorage( this );
bizname = (EditText) findViewById( R.id.editBizName );
item = (EditText) findViewById( R.id.editItem );
price = (EditText) findViewById( R.id.editPrice );
tax = (EditText) findViewById( R.id.editTax );
total = (EditText) findViewById( R.id.editTotal );
String x = ("\nItem/Price: " + item.getText() + price.getText() + "\nTax/Total" + tax.getText() + total.getText());
Toast.makeText( this, x, Toast.LENGTH_SHORT ).show();
try {
this.WriteBusiness(bizname,storeno,rcpt,item,price,tax,total);
String vv = tax.getText().toString();
System.console().printf( "%s", vv );
//new XMLDivisionWriter(getString(R.string.SDDoc) + "/tax_div_business.xml");
} catch (ReflectiveOperationException e) {
e.printStackTrace();
}
}
There! The debate is settled!
All of the above are correct. Additionally, if you do not want others to create subclasses of your class, declare your class as final. It then becomes a leaf of your class hierarchy that no one can extend further. It is good practice to avoid huge class hierarchies.
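A minimal sketch of what that looks like:

// 'final' makes this class a leaf: no one can extend it further.
public final class Leaf {
    // fields and methods as usual
}

// class Broken extends Leaf {} // compile error: cannot inherit from final Leaf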
In many cases,
display:inline;
is enough.
But in some cases, you also have to add the following:
clear:none;
Here you go:
SELECT Field1, COUNT(Field1)
FROM Table1
GROUP BY Field1
HAVING COUNT(Field1) > 1
ORDER BY Field1 desc
Looking at the posted answers, I think this code could also be an alternative for someone. Nobody above used .Shapes.AddPicture in their code, only .Pictures.Insert():
Dim myPic As Object
Dim picpath As String
picpath = "C:\Users\photo.jpg" 'example photo path
Set myPic = ws.Shapes.AddPicture(picpath, False, True, 20, 20, -1, -1)
With myPic
.Width = 25
.Height = 25
.Top = xlApp.Cells(i, 20).Top 'according to variables from correct answer
.Left = xlApp.Cells(i, 20).Left
.LockAspectRatio = msoFalse
End With
I'm working in Excel 2013. I also realized that you need to fill in all the parameters in .AddPicture, otherwise you get the error "Argument not optional". You may ask why I set Height and Width to -1; that doesn't matter, because those parameters are set afterwards inside the With block.
Hope it may also be useful for someone :)
One can also use RLIKE, as below:
SELECT * FROM artists WHERE name RLIKE '^[abc]';
The best answer doesn't work for me. I needed ssh:// from the beginning.
# does not work
git remote set-url origin [email protected]:10000/aaa/bbbb/ccc.git
# works
git remote set-url origin ssh://[email protected]:10000/aaa/bbbb/ccc.git
I searched for a solution to parse very large files (GBs) line by line using a stream. All the third-party libraries and examples did not suit my needs, since they either did not process the files line by line (line 1, 2, 3, 4 ...) or read the entire file into memory.
The following solution can parse very large files, line by line, using stream & pipe. For testing I used a 2.1 GB file with 17,000,000 records. RAM usage did not exceed 60 MB.
First, install the event-stream package:
npm install event-stream
Then:
var fs = require('fs')
, es = require('event-stream');
var lineNr = 0;
var s = fs.createReadStream('very-large-file.csv')
.pipe(es.split())
.pipe(es.mapSync(function(line){
// pause the readstream
s.pause();
lineNr += 1;
// process line here and call s.resume() when rdy
// function below was for logging memory usage
logMemoryUsage(lineNr);
// resume the readstream, possibly from a callback
s.resume();
})
.on('error', function(err){
console.log('Error while reading file.', err);
})
.on('end', function(){
console.log('Read entire file.')
})
);
Please let me know how it goes!
Additionally, if you want to know which extensions are available on your server: SELECT * FROM pg_available_extensions
Here's a function I made based on previous replies: it works on my machine but YMMV.
/**
* @description Converts a string response to an array of objects.
* @param {string} input - The string you want to convert.
* @returns {array} - an array of objects.
*/
function stringToJson(input) {
var result = [];
//replace leading and trailing [], if present
input = input.replace(/^\[/,'');
input = input.replace(/\]$/,'');
//change the delimiter between objects to ';;;'
input = input.replace(/},{/g,'};;;{');
// preserve newlines, etc - use valid JSON
//https://stackoverflow.com/questions/14432165/uncaught-syntaxerror-unexpected-token-with-json-parse
input = input.replace(/\\n/g, "\\n")
.replace(/\\'/g, "\\'")
.replace(/\\"/g, '\\"')
.replace(/\\&/g, "\\&")
.replace(/\\r/g, "\\r")
.replace(/\\t/g, "\\t")
.replace(/\\b/g, "\\b")
.replace(/\\f/g, "\\f");
// remove non-printable and other non-valid JSON chars
input = input.replace(/[\u0000-\u0019]+/g,"");
input = input.split(';;;');
input.forEach(function(element) {
// console.log(JSON.stringify(element));
result.push(JSON.parse(element));
}, this);
return result;
}
If you are using string interpolation, you can format the value directly by adding a ':' followed by the format specifier (e.g. N0, P2, etc.).
decimal Number = 2000.55512016465m;
$"{Number:N}" #Outputs 2,000.55512016465
You can also specify the number of decimal places to show by adding a number to the end like
$"{Number:N1}" #Outputs 2,000.5
$"{Number:N2}" #Outputs 2,000.55
$"{Number:N3}" #Outputs 2,000.555
$"{Number:N4}" #Outputs 2,000.5551
In Visual Studio: if there are no errors, you should be able to see the service reference in the Object Browser, along with all related methods.
I was able to solve this by using local_action to scp the file from machineA to machineC and then copying the file to machineB.
Here is an example to find "all" occurrences of 10 along with the index. Thought this would be of some help.
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

void _find_all_test()
{
    vector<int> ints;
    int val;
    while (cin >> val) ints.push_back(val);

    int count = ints.size();
    vector<int>::iterator it = ints.begin();
    // guard against find() returning end(), which the original do-while dereferenced
    while ((it = find(it, ints.end(), 10)) != ints.end()) // assuming 10 as the search element
    {
        cout << *it << " found at index " << count - (ints.end() - it) << endl;
        ++it;
    }
}
What about like this?
public bool OptionalOutParamMethod([Optional] ref string pOutParam)
{
return true;
}
You still have to pass a value to the parameter from C# but it is an optional ref param.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and it will return an integer type. If you want floating point division, you must cast one or more of the number to floating point types before dividing them. For instance:
int x = 13;
int y = 4;
float q = (float)x / (float)y;
or, if you are using literals:
float q = 13f / 4f;
Keep in mind, floating points are not precise. If you care about precision, use something like the decimal type, instead.
A different approach ... here is an example: if the user pauses for 600-1000 ms while typing, you may consider that they have stopped.
myEditText.addTextChangedListener(new TextWatcher() {

    private String s;
    private long after;
    private Thread t;
    private Runnable runnable_EditTextWatcher = new Runnable() {
        @Override
        public void run() {
            while (true) {
                if ((System.currentTimeMillis() - after) > 600)
                {
                    Log.d("Debug_EditTEXT_watcher", "(System.currentTimeMillis()-after)>600 -> " + (System.currentTimeMillis() - after) + " > " + s);
                    // Do your stuff
                    t = null;
                    break;
                }
            }
        }
    };

    @Override
    public void onTextChanged(CharSequence ss, int start, int before, int count) {
        s = ss.toString();
    }

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
    }

    @Override
    public void afterTextChanged(Editable ss) {
        after = System.currentTimeMillis();
        if (t == null)
        {
            t = new Thread(runnable_EditTextWatcher);
            t.start();
        }
    }
});
You can convert a datetime object to a date with the date() method of the datetime object, as follows:
<datetime_object>.date()
Use the start and end delimiters: ^abc$
This worked for me, within an ASP.NET MVC3 site where I'd left the framework to setup unobtrusive validation etc., in case it's useful to anyone:
$("form").data("validator").settings.ignore = "";
Question 1: Use a delegated handler (.on, or .delegate in older jQuery) on the div to bind a click handler to the button.
Question 2: Use $(this).val() or this.value (the latter is faster) inside the click handler; this will refer to the button.
$("#pg_menu_content").on('click', '#btn_a', function () {
alert($(this).val());
});
$div = $('<div data-role="fieldcontain"/>');
$("<input type='button' value='Dynamic Button' id='btn_a' />").appendTo($div.clone()).appendTo('#pg_menu_content');
When testing for directories remember that every directory contains two special files.
One is called '.' and the other '..'
'.' is the directory's own name, while '..' is the name of its parent directory.
To avoid trailing-backslash problems, just test to see if the directory knows its own name.
eg:
if not exist %temp%\buffer\. mkdir %temp%\buffer
The link mentioned by Subimage was right on the money for me. It suggested changing the virtual host tag, i.e., from <VirtualHost myserver.example.com:443> to <VirtualHost _default_:443>.
Error code:
ssl_error_rx_record_too_long
This usually means the implementation of SSL on your server is not correct. The error is usually caused by a server side problem which the server administrator will need to investigate.
Below are some things we recommend trying.
Ensure that port 443 is open and enabled on your server. This is the standard port for https communications.
If SSL is using a non-standard port then FireFox 3 can sometimes give this error. Ensure SSL is running on port 443.
If using Apache2 check that you are using port 443 for SSL. This can be done by setting the ports.conf file as follows
Listen 80 Listen 443 https
Make sure you do not have more than one SSL certificate sharing the same IP. Please ensure that all SSL certificates utilise their own dedicated IP.
If using Apache2, check your vhost config. Some users have reported that changing <VirtualHost> to <VirtualHost _default_:443> resolved the error.
That fixed my problem. It's rare that I google an error message and get the first hit with the right answer! :-)
In addition to the above, these are some other solutions that other folks have found were causing the issue:
Make sure that your SSL certificate is not expired
Try to specify the Cipher:
SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM:+SSLv3
The line containing ENDLOCAL is still parsed while the local variables are in scope. This allows for tricks like:
ENDLOCAL & SET MYGLOBAL=%SOMELOCAL% & SET MYOTHERGLOBAL=%SOMEOTHERLOCAL%
This is a useful way to transmit results to the calling context. Specifically, %SOMELOCAL% goes out of scope as soon as ENDLOCAL completes, but by then %SOMELOCAL% has already been expanded, so MYGLOBAL is assigned in the calling context with the local variable's value.
For the same reason, if you decide to do:
ENDLOCAL & SET MYLOCAL=%MYLOCAL%
You'll discover your new MYLOCAL variable is actually now around as a regular environment variable instead of the localized variable you may have intended it to be.
It doesn't need to. The RequestMapping annotation supports wildcards and Ant-style paths. It also looks like you just want a default view, so you can put
<mvc:view-controller path="/" view-name="welcome"/>
in your config file. That will forward requests for the root to the welcome view.
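If you do want a controller instead of the config entry, a minimal sketch would be (class and view names are hypothetical):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class DefaultViewController {
    // maps the root path; @RequestMapping also accepts Ant-style patterns like "/welcome/**"
    @RequestMapping("/")
    public String welcome() {
        return "welcome"; // logical view name, resolved by your ViewResolver
    }
}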
Use WebRequest. From Scott Hanselman:
public static string HttpPost(string URI, string Parameters)
{
System.Net.WebRequest req = System.Net.WebRequest.Create(URI);
req.Proxy = new System.Net.WebProxy(ProxyString, true);
//Add these, as we're doing a POST
req.ContentType = "application/x-www-form-urlencoded";
req.Method = "POST";
//We need to count how many bytes we're sending.
//Post'ed Faked Forms should be name=value&
byte [] bytes = System.Text.Encoding.ASCII.GetBytes(Parameters);
req.ContentLength = bytes.Length;
System.IO.Stream os = req.GetRequestStream ();
os.Write (bytes, 0, bytes.Length); //Push it out there
os.Close ();
System.Net.WebResponse resp = req.GetResponse();
if (resp== null) return null;
System.IO.StreamReader sr =
new System.IO.StreamReader(resp.GetResponseStream());
return sr.ReadToEnd().Trim();
}
This can also be done in later versions of Handlebars using the key=value notation:
{{> mypartial foo='bar' }}
Allowing you to pass specific values to your partial context.
Reference: Context different for partial #182
It's simply the domain where your "Facebook" application (meaning an application visible on Facebook but hosted on the website www.xyz.com) will be hosted. So you can put App Domain = www.xyz.com.
^ is binary (as in base-2) xor, not exponentiation (which is not available as a Java operator). For exponentiation, see java.lang.Math.pow().
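A quick illustration of the difference:

public class PowDemo {
    public static void main(String[] args) {
        System.out.println(2 ^ 3);                 // 1   (binary 10 XOR 11)
        System.out.println(Math.pow(2, 3));        // 8.0 (Math.pow returns a double)
        System.out.println((int) Math.pow(2, 3));  // 8
    }
}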
alignment-baseline is not the right attribute to use here. The correct answer is to use a combination of dominant-baseline="central" and text-anchor="middle":
<svg width="200" height="100">_x000D_
<g>_x000D_
<rect x="0" y="0" width="200" height="100" style="stroke:red; stroke-width:3px; fill:white;"/>_x000D_
<text x="50%" y="50%" style="dominant-baseline:central; text-anchor:middle; font-size:40px;">TEXT</text>_x000D_
</g>_x000D_
</svg>
_x000D_
While AngularJS allows you to get a hand on a click event (and thus a target of it) with the following syntax (note the $event
argument to the setMaster
function; documentation here: http://docs.angularjs.org/api/ng.directive:ngClick):
function AdminController($scope) {
$scope.setMaster = function(obj, $event){
console.log($event.target);
}
}
this is not a very Angular way of solving this problem. With AngularJS the focus is on model manipulation: one mutates the model and lets AngularJS figure out the rendering.
The AngularJS-way of solving this problem (without using jQuery and without the need to pass the $event
argument) would be:
<div ng-controller="AdminController">
<ul class="list-holder">
<li ng-repeat="section in sections" ng-class="{active : isSelected(section)}">
<a ng-click="setMaster(section)">{{section.name}}</a>
</li>
</ul>
<hr>
{{selected | json}}
</div>
where methods in the controller would look like this:
$scope.setMaster = function(section) {
$scope.selected = section;
}
$scope.isSelected = function(section) {
return $scope.selected === section;
}
Here is the complete jsFiddle: http://jsfiddle.net/pkozlowski_opensource/WXJ3p/15/
Use the dist() function, but you need to form a matrix from the two inputs for the first argument to dist():
dist(rbind(x1, x2))
For the input in the OP's question we get:
> dist(rbind(x1, x2))
x1
x2 7.94821
a single value that is the Euclidean distance between x1 and x2.
You should look at the File method of the Controller. This is exactly what it's for. It returns a FilePathResult instead of an ActionResult.
An easy way of doing it is using this :)
var word = "waffle"
word.endsWith("e")
class Countries < ActiveRecord::Base
  self.table_name = "cc"
end
Or, in older versions of Rails:
class Countries < ActiveRecord::Base
  set_table_name "cc"
  ...
end
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'
IDENTIFIED BY 'YOUR_PASS'
WITH GRANT OPTION;
FLUSH PRIVILEGES;
*.* stands for DB.TABLE; you can restrict the user to a specific database and a specific table.
In 'root'@'%' you can replace root with any user you created, and % allows connections from any IP. You can restrict it with a specific address or pattern such as 192.168.1.% too.
If that doesn't resolve it, also modify my.cnf or my.ini and comment out these lines:
bind-address = 127.0.0.1 to #bind-address = 127.0.0.1
and
skip-networking to #skip-networking
On a Raspberry Pi, I found the bind-address configuration under /etc/mysql/mariadb.conf.d/50-server.cnf.
We can also try this solution
ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath*:app-context.xml");
With the classpath*: prefix, Spring automatically finds the file on the classpath itself.
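For completeness, a minimal usage sketch (the bean id "myService" is hypothetical and would be defined in app-context.xml):

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ContextDemo {
    public static void main(String[] args) {
        // classpath*: searches every classpath root (including JARs) for the file
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("classpath*:app-context.xml");
        Object myService = ctx.getBean("myService");
        System.out.println(myService);
        ctx.close();
    }
}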